Fine Time At The Podcast

Thanks to Vivienne and the team at The Healthcare Quality Podcast. I had a great time learning about the specifics of podcasting and appreciated the help muddling through the talk!

I look forward to working with you all in the future…I wonder how many times I used the word “share” at the beginning of the cast!

I count 4 times in 30 seconds…how many times do you think I over-used that word?  “Share” your thoughts anytime!

What You Should Know About Gamification In Healthcare

By:  David Kashmer, MD MBA FACS (@DavidKashmer)


Why bother with gamification?


Did you know there is an engagement crisis in America? It turns out that more than $500 billion in revenue is lost every year to the fact that employees are simply not engaged with their jobs (1). In fact, Gallup reports that 70% of American workers are either emotionally disconnected from their work or actively working against their company (1).


This crisis extends across America, and the worst part is that it’s insidious. The fact that employees often aren’t engaged leads to all sorts of missed opportunities, exaggerated costs, and unrealized revenues…and it does so slowly, in ways that are difficult to perceive. Gamification is one potential solution to the employee engagement crisis, and it’s one that’s worked very well for me. Let me share some stories and thoughts about gamification as I’ve applied it in healthcare.


A story


Once upon a time, a section of Surgery was attempting to engage its residents in a dramatic culture change. Certain priorities around critically ill patients and administrative issues were going unrealized, and they didn’t translate easily to the residents or to daily work at the front lines of medicine.


There were different philosophies of care circulating in the department, and it was challenging to get through to a culture that had been in place for some time. The solution the team used? Gamification.


Deploying a comprehensive gamified system facilitated culture change and increased the rate of improvement for the section substantially. The resident (and attending) staff involved experienced increased job satisfaction as measured by a standardized tool.  Look here for more information.


Here is a little bit about gamification to let you know what the team did and how it succeeded.


Gamification is more than just points, badges and leader boards


You may already know that gamification is the use of game dynamics, techniques, and themes to improve staff engagement. Among the most commonly used techniques are points, badges, and leader boards.


Points are awarded to participants for certain actions according to what the designers feel is important. Similarly, badges highlight special achievements and levels reached by the participants. These external signs of “leveling up” assist in providing social proof for culture change and reinforce aspects of compliance. A leader board uses peer benchmarking and peer motivation to help participants understand where they are relative to others in their group. Gamification, however, is much more than just these points, badges, and leader boards (so-called “PBLs”).
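To make the PBL bookkeeping concrete, here is a minimal sketch in Python. This is a hypothetical illustration, not the system the team actually built: the action names, point values, and badge thresholds are invented for the example. Points accrue per action, badges fire at thresholds, and the leader board is just a sorted view of the point totals.

```python
# Minimal points/badges/leader-board (PBL) sketch -- hypothetical, for illustration only.
from collections import defaultdict

# Point values and badge thresholds are made-up example numbers, not a validated design.
POINT_VALUES = {"on_time_signout": 5, "checklist_complete": 10}
BADGE_THRESHOLDS = {50: "Bronze", 100: "Silver", 200: "Gold"}

class PBLTracker:
    def __init__(self):
        self.points = defaultdict(int)   # participant -> point total
        self.badges = defaultdict(set)   # participant -> badges earned

    def record(self, participant, action):
        """Award points for an action, then grant any newly reached badges."""
        self.points[participant] += POINT_VALUES[action]
        for threshold, badge in BADGE_THRESHOLDS.items():
            if self.points[participant] >= threshold:
                self.badges[participant].add(badge)

    def leader_board(self):
        """Peer benchmarking: participants ranked by point total, highest first."""
        return sorted(self.points.items(), key=lambda kv: kv[1], reverse=True)
```

The design point is that the PBL layer itself is trivial; the hard (and valuable) work is choosing which behaviors earn points and what “leveling up” unlocks.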


Although PBLs may be some of the external signs of gamification, there are other important techniques that can be utilized. One, for example, is often called the appointment dynamic. The appointment dynamic is the idea of giving positive reinforcement to the participant for returning to the same spot, location, or scheduled event at a certain time. (Works well to reinforce morning sign-out!) Rewarding this behavior with points can be an important dynamic for driving culture change and improvement, such as improving the function of a healthcare department.


Another gamification technique is unlocking new skills. When participants “level up,” this can be interpreted as achieving a certain level of competency within the system, and it can allow them to unlock new abilities. It is a much more intriguing way to approach competency-based training.


In the story of the surgery department above, for example, participants gained new skills when they achieved certain point levels. Surgical residents gained the ability to examine and clear the cervical spine for injury in trauma patients: they achieved a certain level of points, took a brief test and interview, and were validated to clear cervical spines, which is a very important skill in trauma and emergency surgery.


Gamification uses powerful themes and motivators to engage


Did you know that the Millennial generation is a larger bulge in the population plot than the Baby Boomers? There are now more Millennials in the United States than any other generational segment. These Millennials gravitate toward the clear social interactions that can evolve from techniques like game dynamics. As we mentioned above, gamification uses peer benchmarking, and indirectly positive peer pressure, to achieve excellent results. It seems to resonate especially with generations other than the ones typically in administrative positions nowadays. In other words, the technique is for them, not for us.


That can make it tough to understand for administrators, but no less effective for the staff who execute the organizational goals at the front lines.


Gamification can be inexpensive to deploy


A gamification system does not need to cost tens of thousands of dollars to deploy at your hospital or business. Techniques like the gamification model canvas allow you to design a comprehensive game that can work well for your system. Look here.


Questions about gamification or how to deploy it at your center? Send me an email; I’m always happy to help.


Questions or comments about gamification, particularly as it applies to healthcare? You may wonder how staff react to the term “gamification.” You may wonder how we use leader boards, and specifically how everyone reacts to having their name on a visible point tally. I’m happy to share these and other specifics of how we’ve successfully employed gamification in healthcare settings. In particular, I’m glad to share how gamification has improved job satisfaction among participants in statistically significant ways.



(1) Ed O’Boyle and Jim Harter, State of the American Workplace (Gallup, 2013).


4 Types of Bad Metrics Seen In Healthcare


By:  DM Kashmer MD MBA MBB FACS (@DavidKashmer)


Sometimes, you can see the train coming but can’t get out of the way fast enough. Whack! The train gets you despite your best efforts. Wouldn’t it have been great to start getting out of the way earlier? In this entry, let’s focus on how to identify, as early as possible, four types of bad metrics in healthcare so that we can run away from that particular train as early as possible. After all, the sooner we flee from these bad actors, the more likely we are to avoid being run over by them.


Truth is, you’ve probably seen the train of bad metrics before.  After all, you know that all sorts of things are getting measured in our field nowadays and, sometimes, certain endpoints don’t feel particularly helpful and (in fact) seem to make things a lot worse.


First, a disclaimer: this entry does not argue with metrics that the government mandates. There are some things we measure because we have to, for reimbursement or other reasons, and this work does not push back against them. However, if you believe (like me and other quality professionals) that a focus on reducing defects eventually impacts all sorts of quality measures (even mandated ones), then this is the entry for you! Now, on with the show…


Let’s explore four broad categories of bad metrics and how to avoid them.


#1 Metrics for which you cannot collect accurate or complete data.


It can be very challenging, in hospitals, to collect data. Often, data collection is frowned upon, treated as an afterthought, or seen as an imposition. So, as we launch in here, remember: saying that you can’t collect complete or accurate data is not the same as actually being unable to.


Colleagues, listen:  if you think you can’t afford the time to collect good data, let me tell you that you can’t afford not to collect and use data.


When I’m working with a team that’s new to Lean or Six Sigma and we discuss data collection, the team often balks and focuses on the fact that no one is available to measure data, that we don’t have data collection resources or that, even if we had resources, we can’t get data.


I usually start with a quote:  “If you think it’s tough to get data, remember how tough it is to not get data.” (Split infinitive included for drama’s sake.)


Then we go on to explore together several techniques we can use to make gathering data much easier, so that we can avoid the “easy out” of “we can’t collect data about this, so it’s not a useful metric.” In fact, most projects we do require data collection of 1-2 seconds per patient at most. And that’s for prospective data collection. (Want more info about how to make data collection easy? Email me and I’ll pass it along.)


However, in healthcare, we have all seen projects where data collection is arduous and so we react against data collection when we hear about it.


Sometimes, teams focus on using retrospective data. Of course, using retrospective data is much better than using no data. However, retrospective data has often been cleaned via editing or in some other way that makes it less valuable. Raw data that focuses on the specific operational definition of what you’re looking at tends to have the most value.


Sometimes, you have no way to measure a certain metric or concept and yet the team believes that concept to be very valuable. Take, for instance, a team that focused on scheduling patients for the operating room. The team felt that many patients were not prepared adequately before coming to the holding room. This included all sorts of ideas such as not having consent on the chart or some other issue. The team decided to measure this prospectively and found that only about one third of patients were completely prepared by the time they came to the pre-operative holding area. This was measured prospectively with a discrete data check sheet.


Let me explain: sometimes, the fact that something hasn’t been measured before means the organization has never had that concept on its radar. This goes back to the old statement that what is measured gets managed, and its corollary: an endpoint that is not measured is very hard to manage.


To wrap this one up: one category of bad metric is a metric for which you cannot collect data. However, it is important to realize that just because you haven’t measured something before doesn’t mean you absolutely cannot measure it. Sometimes, if the idea or concept is important enough, you should develop a measure for it. We discuss how to develop a new endpoint in the entry here. That said, if it is absolutely impossible or arduous to collect accurate or complete data, the metric is much less likely to have value…but don’t just let yourself off the hook! If you think something is important to measure, know that there are ways to collect data that require only a few seconds per patient!


#2 Metrics that are complex and difficult to explain to others.


If a metric gives a result that people can’t feel or conceptualize, it’s just plain less valuable. Take, for example, a metric for OR readiness. In the month of April, the operating room received a very clear score on this metric. That score was “pumpkin.”


“Pumpkin?!”…Well, pumpkin doesn’t mean much to us in terms of operating room readiness. For that reason, you may want to measure your OR preparedness with a different metric than the pumpkin. Complex and difficult metrics that lack tangible meaning should be avoided. Choose something that tells a story or evokes an emotion. Once upon a time, a center created (and validated) a “Hair On Fire Index” to indicate the level of emergent problems and crazy situations the operating room staff encountered in a day, as a gauge of how stressed the OR staff was that day. Wonder how they did it? Look here.


#3 Metrics that complicate operations and create excessive overhead.


This type of metric is especially problematic. If a metric is difficult to measure and requires an incredible level of structure and workload to create, it may not be useful.


Imagine, for example, a metric to predict sepsis that requires a twelve part scoring system, multiple regression, and the computing power of IBM’s Watson. This may not be a useful day to day metric for quality or outcome. Metrics that complicate operations and create excessive difficulty should be avoided.  When you see that type of metric coming, jump out of the way of the train.


#4 Metrics that cause employees to ‘make their numbers’.


This is similar to problem metric number two. When staff can’t feel the metrics that we describe, or see how they affect patient care, it can be very hard to mentally link what we do every day to our quality levels. That can lead to situations where employees are acting just to ‘make their numbers’. That type of focus is difficult and makes metrics less useful.


It’s important to have metrics that we perceive as having a tangible relationship to patients and their outcomes. We are so busy in healthcare that often if staff can fudge a metric, complete a form just to say it’s done, or in some other way ‘make numbers’, well, we often see that’s what happens. (That effect may not just be confined to healthcare of course!) It can be very challenging to create a metric that very clearly indicates what we have to do (and should be doing) rather than one that is sort of an abstract number we ‘have to hit’.


Take Aways, Or How To Avoid Being Hit By The Train Of Bad Metrics

In conclusion, there are at least four types of bad metrics and very clear ways to avoid them. Take a moment to try to see these trains coming from as far away in the distance as possible so that you can quickly get off the tracks unscathed.


We need metrics that we can feel and that tell a story of our patient care. We need ones that, whether government mandated or not, seem to relate to what we do every day. We need ones that are easily gathered and that tell the story of our performance clearly, both to us as practitioners and to the staff who review us. Sometimes we are mandated to collect certain endpoints, yet over time I have come to find that when we do a good job with metrics that have meaning, we often have fewer defects and see better outcomes in all the metrics…whether we are mandated to collect a particular one or not.


As part of your next quality project and how you participate in the healthcare system, take a minute to focus on whether the metrics you’re using are useful and, if not, how you can make them better.  Be the first to sound the alarm if you see the train of bad metrics on the track to derail meaningful improvement for our patients.

How You Measure The Surgical Checklist Determines What You Find

By:  DMKashmer MD MBA MBB FACS (@DavidKashmer)


Have you ever wondered how a measurement system affects your conclusions? There are several ways we’ve mentioned that the type of data you choose affects a great deal about your quality improvement project. In this entry, let’s talk more about how your setup for measuring a certain quality endpoint determines, in part, what you find…and more importantly, perhaps, how you respond.


The Type Of Data You Collect Affects What You Can Learn


Remember, previously, we discussed discrete versus continuous data. Discrete data, we mentioned, is data that is categorical, such as yes/no, go/stop, black/white, or red/yellow/green. This type of data has some advantages including that it can be rapid to collect. However, we also described that discrete data comes with several drawbacks.


First, discrete data often requires a much larger sample size to demonstrate significant change. Look here. Remember the simplified equation for discrete data sample size:




n = p (1 − p) (2 / delta)^2

where p = the probability of some event, and delta is the smallest change you would like to be able to detect.


So, let’s pretend we wanted to detect a 10% (or greater) improvement in some feature of our program, which is currently performing at a rate of 40% of such-and-such. We would need a sample size of (0.40)(0.60)(2/0.10)^2, or 96 samples.
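That arithmetic is easy to check in code. Here is a small sketch of the simplified discrete formula above (a back-of-the-envelope calculation, not a full power analysis):

```python
def discrete_sample_size(p, delta):
    """Simplified sample size for detecting an absolute change of `delta`
    in a proportion currently at `p`: n = p(1 - p)(2/delta)^2."""
    return p * (1 - p) * (2 / delta) ** 2

# 40% baseline performance, smallest detectable change of 10% (absolute):
n = discrete_sample_size(0.40, 0.10)
print(round(n))  # 96
```

Note how quickly the required n grows as delta shrinks: halving the detectable change to 5% quadruples the sample size to 384.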


Continuous Data Require A Smaller Sample Size


Continuous data, by contrast, requires a much smaller sample size to show meaningful change. Look at the simplified continuous data sample size equation here:


(2 [standard deviation] / delta)^2
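The same quick check works for the continuous formula. The standard deviation below is an invented example value, chosen only to show how much smaller the required sample can be:

```python
def continuous_sample_size(sd, delta):
    """Simplified continuous-data sample size: n = (2 * sd / delta)^2."""
    return (2 * sd / delta) ** 2

# Hypothetical: sd of 15 units on a continuous endpoint, detect a 10-unit shift.
n = continuous_sample_size(15, 10)
print(n)  # 9.0
```

Nine samples versus ninety-six: same back-of-the-envelope spirit, but the continuous endpoint carries far more information per observation.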


This is an important distinction between discrete and continuous data and, in part, can play a large role in what conclusions we draw from our quality improvement project.  Let’s investigate with an example.


A Cautionary Fairy Tale


Once upon a time there was a Department of Surgery that wanted to improve its usage of a surgical checklist. The team believed this would help keep patients safe in their surgical system. The team decided to use discrete data.


If a checklist was missing any element at all (and there were lots of elements), it was called “not adequate.” If it was complete from head to toe, 100%, then it counted as “adequate.” The team collected data on its current performance and found that only 40% of checklists were adequate. The team’s target goal was 100%.


Using the discrete data formula, the team set up a sample that (at best) would allow them to detect only changes of 10% or larger. That was going to require a sample size of 96 per the simplified discrete data formula above.


The team made interesting changes to their system. For example, they made changes so that the surgeon would need to be present on check-in for the patient, and they made other changes to patient flow that they felt would result in improved checklist compliance.


Weeks later, the team re-collected its data to discover how much things had improved. Experientially, the team saw many more checklists being utilized, and there was significantly more participation. Much more of each checklist was being completed, per observations. The team expected significant improvement and was excited to see the new numbers. Unfortunately, when the team used those numbers in statistical testing, there was no significant improvement in checklist utilization. Why was that?


This resulted because the team had utilized discrete data. Anything other than complete checklist utilization landed in the “not adequate” bin and was counted against them. So, even if checklists were much more complete than they ever had been (and that seemed to be so), anything less than perfection still counted against the percentage of complete (“adequate”) checklists. Because they used discrete data in that way, they were unable to demonstrate significant improvement based on their numbers. They were disappointed, even though they had actually made great strides.


What options did the team have?  Why, they could have developed a continuous data endpoint on checklist completion.  How?  Look here.  This would have required a smaller sample size and may have shown meaningful improvement more easily.
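One way to picture that alternative, as a hypothetical sketch rather than the department’s actual endpoint: score each checklist by the fraction of items completed instead of making a binary adequate/not-adequate call. The item names below are invented for illustration.

```python
def adequate(checklist):
    """Discrete endpoint: adequate only if every single item is complete."""
    return all(checklist.values())

def percent_complete(checklist):
    """Continuous endpoint: fraction of items completed (0.0 to 1.0)."""
    return sum(checklist.values()) / len(checklist)

# Hypothetical 5-item checklist with one item missed:
c = {"consent": True, "antibiotics": True, "site_marked": True,
     "team_introduced": True, "imaging_reviewed": False}
print(adequate(c))          # False -- counts fully against the team
print(percent_complete(c))  # 0.8  -- captures the real improvement
```

Under the discrete endpoint, this checklist is indistinguishable from one with every item missing; under the continuous endpoint, the improvement from, say, 40% complete to 80% complete is visible and testable with a far smaller sample.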


A Take-Home Message


So remember: discrete data can limit your ability to demonstrate meaningful change in several important ways. Continuous data, by contrast, can allow teams like the checklist team above to demonstrate significant improvement even if checklists are still not quite 100% complete. For your next quality improvement project, choose carefully between discrete and continuous data endpoints, and recognize how your choice can greatly impact your ability to draw meaningful conclusions as well as your chance of celebrating meaningful change.