Using two quality improvement tools together can be more powerful than using one alone. One great example is the use of the fishbone diagram and multiple regression as a highly complementary combination. In this entry, let’s explore how these two tools, together, can give powerful insight and decision-making direction to your system.
You may have heard of a fishbone, or Ishikawa, diagram previously. This diagram organizes the multiple potential causes of special cause variation. From previous blog entries, recall that special cause variation may be loosely defined as variation above and beyond the normal variation seen in a system. These cause categories are often used in root cause analysis in hospitals. See Figure 1.
As you also may recall from previous discussions, there are six categories of special cause variation. These are sometimes called the “6 M’s” or “5 M’s and one P”. They are Man, Materials, Machine, Method, Mother Nature, and Management (the 6 M’s). We can replace the word “man” with the word “people” to obtain the “5 M’s and one P” version of the mnemonic device. In any event, the Ishikawa diagram is a powerful tool for demonstrating the root causes of different defects.
Although fishbone diagrams are intuitively satisfying, they can also be very frustrating. For example, once a team has met and created a fishbone diagram, well…now what? Other than opinion, there really is no data to demonstrate that what the team THINKS is associated with the defect / outcome variable is actually associated with that outcome. In other words, the Ishikawa represents the team’s opinions and intuitions. But is it actionable? That is, can we take action based on the diagram and expect tangible improvements? Who knows. This is what’s challenging about fishbones: we feel good about them, yet can we ever regard them as more than just a team’s opinion about a system?
Using another tool alongside the fishbone makes for greater insight and more actionable data. We can more rigorously demonstrate that the outcome / variable / defect is directly and significantly related to those elements of the fishbone about which we have hypothesized with the group. For this reason, we typically advocate taking that fishbone diagram and utilizing it to frame a multiple regression. Here’s how.
We do this in several steps. First, we label each portion of the fishbone as “controllable” or “noise”. Said differently, we try to get a sense of which factors we have control over and which we don’t. For example, we cannot control the weather. If sunny weather is significantly related to the number of patients on the trauma service, well, so it is and we can’t change it. Weather is not controllable by us. When we perform our multiple regression, we include every identified factor, each labeled as controllable or noise. Then, depending on how well the model fits the data, we may decide to see what happens when the elements beyond our control are removed from the model, so that only the controllable elements are used. Let me explain this interesting technique in greater detail.
Pretend we create the fishbone diagram in a meeting with stakeholders. This lets us know, intuitively, what factors are related to different measures. We sometimes talk about the fishbone as a hunt for Y=f(x) where Y is the outcome we’re considering and it represents a function of underlying x’s. The candidate underlying x’s (which may or may not be significantly associated with Y) are identified with the fishbone diagram. Next, we try to identify which fishbone elements are ones for which we have useful data already. We may have rigorous data from some source that we believe. Also, we may need to collect data on our system. Therefore, it bears saying that we take specific time to try to identify those x’s about which we have data. We then establish a data collection plan. Remember, all the data for the model should be over a similar time period. That is, we can’t have data from one time period and mix it with another time period to predict a Y value or outcome at some other time. In performing all this, we label the candidate x’s as controllable or noise (non-controllable).
Next, we seek to create a multiple regression model with Minitab or another program. There are lots of ways to do this, and some of the specifics are ideas we routinely teach to Lean Six Sigma practitioners and clients. These include the use of dummy variables for yes/no data questions, such as “was it sunny or not?” (You can use 0 as no and 1 as yes in your model.) Next, we perform the regression, taking care to account for confounding if we think two or more x’s are clearly related. (We will describe this more in a later blog entry on confounding.) Finally, when we review the multiple regression output, we look for an r² value greater than 0.80, which would indicate that more than 80% of the variability in our outcome data, our Y, is explained by the x’s in the model. We prefer high r² and adjusted r² values. Adjusted r² is a more stringent measure that penalizes the model for each additional predictor, and we like both r² and adjusted r² to be high.
Next, we look at the p-values associated with each of the x’s to determine whether any of them affect the Y in a statistically significant manner. As a final and interesting step, we remove those factors that we cannot control and run the model again to determine what portion of the outcome is in our control. We ask: “What portion of the variability in the outcome data is within our control, per choices we can make?”
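As a sketch of these steps in Python (with made-up numbers and hypothetical factor names: staffed ICU beds stand in for a controllable x, and a sunny-weather dummy for a noise x), one could fit the full model and then refit with only the controllable factors:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares; returns coefficients, r^2, and adjusted r^2."""
    X1 = np.column_stack([np.ones(len(y)), X])   # prepend an intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    n, p = X1.shape                              # p counts the intercept too
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
    return beta, r2, r2_adj

# Hypothetical data: y = hours on divert per week; x_beds = staffed ICU beds
# (controllable), x_sun = 1 if sunny, 0 if not (noise -- a yes/no dummy).
rng = np.random.default_rng(1)
x_beds = rng.uniform(2, 12, 80)
x_sun = rng.integers(0, 2, 80).astype(float)
y = 40.0 - 2.5 * x_beds + 3.0 * x_sun + rng.normal(0, 2, 80)

# Full model: every fishbone factor, controllable or not.
_, r2_full, r2adj_full = fit_ols(np.column_stack([x_beds, x_sun]), y)

# Controllable-only model: drop the noise factors and refit to ask what
# portion of the variability is explained by choices we can make.
_, r2_ctrl, r2adj_ctrl = fit_ols(x_beds.reshape(-1, 1), y)
```

Comparing r2_ctrl with r2_full gives a rough sense of how much of the explained variability is attributable to controllable factors; a statistics package such as Minitab reports the analogous r², adjusted r², and p-values directly.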
So, at the end of the day, the Ishikawa / fishbone diagram and the multiple regression are powerful tools that complement each other well.
Next, let me highlight an example of a multiple regression analysis, in combination with a fishbone, and its application to the real world of healthcare:
A trauma center had issues with perceived excess time on “diversion”, that time during which patients are not being accepted and so are diverted to other centers. The center had more than 200 hours of diversion over a brief time period, and the administration was floating multiple explanations for why this was occurring. Clearly, diversion could impact quality of care for injured patients (because they would need to travel further to reach another center) and could represent lost revenue.
Candidate reasons included an idea that the emergency room physicians (names changed in the figure beneath) were simply not talented enough to avoid the situation. Other proposed reasons included the weather, and still others the lack of availability of regular hospital floor beds. The system was at a loss for where to start, and it was challenging for everyone to be on the same page about what to do next with this complex issue.
For this reason, the trauma and acute care surgery team performed an Ishikawa diagram with relevant stakeholders and combined this with the technique of multiple regression to allow for sophisticated analysis and decision making. See Figure 2.
Variables utilized included the emergency room provider who was working when the diversion occurred (as they had been impugned previously), the day of the week, the weather, and the availability of intensive care unit beds, to name just a sample. The final regression gave an r² value less than 0.80 and, interestingly, the only variable which reached significance was the presence or absence of ICU beds. How do we interpret this? The variables included in the model explain less than 80% of the variation in the amount of time the hospital was in a state of diversion (“on divert”) for the month. However, we can say that the availability of ICU beds is significantly associated with whether the hospital was “on divert”: fewer ICU beds were associated with increased time on divert. This gave the system a starting point to correct the issue.
Just as important was what the model did NOT show. The diversion issue was NOT associated significantly with the emergency room doctor. Again, as we’ve found before, data can help foster positive relationships. Here, it disabused the staff and the rest of administration of the idea that the emergency room providers were somehow responsible for (or associated with) the diversion issue.
The ICU was expanded in terms of available nursing staff, which allowed more staffed beds and made the ICU more available to accept patients. Recruitment and retention of nurses were linked directly to the diversion time for the hospital: the issue was staffed beds, and so the hospital realized that hiring more nursing staff was one needed intervention. This led to a recruitment and retention push and, shortly thereafter, an increase in the number of staffed beds. The diversion challenge resolved immediately once the additional staff was available.
In conclusion, you can see how the fishbone diagram, when combined with multiple regression, is a very powerful technique to determine which issues underlie the seemingly complex choices we make on a daily basis. In the example above, a trauma center utilized these techniques together to resolve a difficult problem. At the end of the day, consider utilizing a fishbone diagram in conjunction with a multiple regression to help make complex decisions in our data-intensive world.
Thoughts, questions, or feedback regarding your use of multiple regression or fishbone diagram techniques? We would love to hear from you.
There are some quality improvement projects that are so straightforward we see them repeated across the country. One of these straightforward projects is decreasing the amount of surgical instruments we have in our operative pans. The impetus to do this is that we only infrequently use the many clamps and devices that we routinely have sterilized for different procedures. This project, which we commonly refer to as “Leaning the pan”, is so useful and intuitive that it is repeated across the country. Here we take a second to describe how the project looks and some ways in which you might decide to apply it in your practice.
First, much of this Lean project focuses on the concept of value added time, or VAT. It turns out, in most systems, only approximately 1% of the time is spent adding value to whatever implement or service we are providing. It’s a striking statistic that we see repeated across systems. Again, only approximately 1% of our time is generally spent in things for which the customer will pay. As we have described before on the blog, here, one of the challenges we have in healthcare is establishing who the customer is. In part, the customer is the patient who receives the service. In another very real sense the customer is the third party payer who reimburses us for our procedures. The third party payer does not reimburse us any more or less if we use 20 Kelly clamps or 10 Kelly clamps to finish a procedure. Do we need 40 Kelly clamps in a pan? If we use the most expensive Gortex stitch, or the least expensive silk suture, our reimbursement does not vary. So, this concept of value added time is key in Leaning the pan.
We can demonstrate as we go through this quality improvement project that we are decreasing the amount of time spent doing things that do not add value to the case. In short, we can demonstrate that our proportion of value added time increases just as our proportion of non-value added time decreases. Again, notice that this concept of value added time focuses us squarely on the idea that, in most systems, we only spend approximately 1% of our time adding value to our output. So, as we begin to set up the preconditions for this project, one of the ideas we can focus on is how much time a procedure takes. Here, the concept of an operational definition becomes important. When does a procedure start and end? The procedure can start from the time the nurse opens the pan and spends time counting (along with the scrub nurse) the implements in the pan. Alternatively, we can focus on room turnover time and include the counting as part of that defined time. This is just one way to demonstrate a decrease in non-value added time, and it highlights the importance of definition. Fewer instruments to count translates into less time spent counting. We can also define the procedural time as beginning when the instruments are sterilized and repackaged. Again, as with all quality improvement projects, the operational definition of what time we are measuring and what we call procedural time is key.
Another useful idea in Leaning the pan is the Pareto diagram. As you probably remember, the Pareto Principle (or 80/20 rule) was originally developed by the Italian economist Vilfredo Pareto. It holds that approximately 80% of the effect seen is caused by 20% of the possible causes for that effect. In other words, there are a vital few causes which create the bulk of the effect in a system. This has been extrapolated well beyond the initial data Pareto utilized to describe the principle; Pareto was focused on wealth in Italy. However, it turns out that the 80/20 principle has been applied to many other systems and practices throughout the business and quality improvement world. In short, there is now a named diagram and Lean Six Sigma tool called the Pareto diagram.
The Pareto diagram is a histogram that demonstrates frequency of use or occurrence of different items or implements in a system. See Figure 1. In general, we know that if we select 10 instruments and plot out how frequently they are used, we will find that only approximately 2 of the 10 instruments are responsible for over 80% of the usage of instruments in a procedure.
The example above highlights how two complaints of ten possibilities about food (overpriced and small portions) are responsible for around 80% of issues. Similarly, this tool may be used as a graphic way to demonstrate that the bulk of instruments in the operative pan are not used or are used rarely. There are several options here. First we could create a data collection plan to demonstrate how many times each instrument is used in the pan. Clearly this takes some data collection. Next we could demonstrate, as a Pareto diagram, instrument usage. Next we could say “Ok let’s remove from the pan the rarely used instruments or perhaps keep a few of the rarely used instruments that are particularly hard to find.”
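These steps can be sketched directly from a usage tally. A minimal Python sketch (the instruments and counts below are hypothetical illustrations, not collected data):

```python
# Hypothetical usage counts per instrument over a month of cases.
usage = {
    "Kelly clamp": 310, "needle driver": 280, "forceps": 45, "Metzenbaum": 30,
    "right angle": 18, "Allis clamp": 9, "Babcock": 5, "tonsil clamp": 4,
    "Kocher": 3, "Deaver retractor": 2,
}

# Rank instruments from most- to least-used (the Pareto ordering).
ranked = sorted(usage.items(), key=lambda kv: kv[1], reverse=True)
total = sum(usage.values())

# Walk down the ranking, accumulating share of total use; the instruments
# needed to reach ~80% are the "vital few" -- strong candidates to keep.
cum = 0
vital_few = []
for name, count in ranked:
    cum += count
    vital_few.append(name)
    if cum / total >= 0.80:
        break

# Everything else is rarely used -- candidates to pull from the pan
# (or move to an accessory pack if hard to find).
rare = [name for name, _ in ranked if name not in vital_few]
```

With these made-up counts, two of the ten instruments account for over 80% of use, mirroring the 80/20 pattern described above; plotting `ranked` as a histogram with a cumulative-percentage line gives the Pareto diagram itself.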
In any case, we will discover that the bulk of the instruments we sterilize every time are not vital to performance of the procedure and are, in fact, expendable. So, the Pareto diagram is a useful tool to demonstrate which instruments can (and should) be removed from the pan. Again, this may take some data collection.
We have now demonstrated a straightforward way to show a change in value added time with our surgical instrument sterilization project, and we have also demonstrated one of the key ways to highlight which instruments are used and which can go. Next, let’s discuss some of the interesting solutions and consequences from Leaning the pan projects across the country. First, we can usually establish consensus from a team of surgeons based on data about which tools and instruments are used. We can establish one pan, backed by data, that contains the instruments we all use as surgeons. This eliminates the need for each surgeon to have their own special pan. We can then take those hard-to-find instruments, or things individual surgeons feel are must-haves or must-have-available, and put those in accessory packs for each surgeon. So, the basic laparotomy tray can be the same for everyone with its Lean, time-saving methodology. This saves time not just for one procedure but, over the total number of procedures, a surprising amount of time: if we performed 1000 exploratory laparotomies in a year and saved 5 minutes per laparotomy, we have saved 5000 minutes of non-value added time over the course of the year. Some simple math shows that this is roughly 83 hours of non-value added time eliminated per year. Things like this are useful and key to establishing the utility of these projects. Let’s look at some other keys.
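The time-savings arithmetic above is easy to sketch (the case volume and per-case savings are the illustrative figures from the text, not measured data):

```python
# Illustrative inputs: annual case volume and minutes saved per case
# by the leaner, faster-to-count pan.
cases_per_year = 1000
minutes_saved_per_case = 5

total_minutes_saved = cases_per_year * minutes_saved_per_case  # 5000 minutes
hours_saved = total_minutes_saved / 60                         # ~83.3 hours
```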
One of the other keys to a successful project is a project charter. Before we even begin a Leaning the surgical pan project, it is useful to have a stakeholder meeting with all the people involved in sterilizing trays and pans. This way, there can be a discussion about some of the things required by our system and why things are the way they are now; it is important to get a sense of this at the beginning of a quality improvement project. A project charter will include the scope of the project, the people involved, and an outline of the days required for the project to be completed. Among Lean Sensei and Lean Six Sigma Black Belts, the project charter is one of the most frequently used tools in the body of knowledge at the onset of a project like this. It clearly focuses us on what is important for the project: the timeline, the stakeholders, and what the outcome measures will be. Again, for the Leaning the pan project we would recommend value added time as one of the key outcome measures.
Another key outcome measure should include something about cost. This helps with the business case for managing up the organization. Typically, in Lean and Six Sigma projects we use the cost of poor quality (COPQ), which we have described previously here. In this case, the cost of poor quality is somewhat more challenging to establish. Remember, the cost of poor quality is composed of four “buckets”: the costs of internal failures, external failures, surveillance, and prevention. For more information on the COPQ and how it is calculated, look here. What internal and external failures exist with this Leaning the pan model? Instead of using a strict COPQ in this case, we recommend demonstrating any cost savings based on the cost of instrument sterilization, the number of times instruments can be sterilized before being replaced (life extension for instruments), and the savings that flow from the decreased amount of time spent counting a tray (i.e., more cases).
In short, it will be very challenging to demonstrate direct cost savings with this type of Leaning the pan project. We have seen around the country with this project that it is challenging to demonstrate firm cost savings on the income statement or balance sheet. However, this is a good starter Lean project and can really help the surgeons and operative team see the value in the Lean methodology. It can also help build consensus as an early, straightforward project in your Lean or Six Sigma journey.
In conclusion we have described the process of Leaning the operating room pan. As most Lean projects go, this one is relatively straightforward and includes the concept of value added time in addition to the Pareto diagram. It is more challenging to use other Lean tools such as value stream mapping and load leveling with a project like this. However, some standard Lean tools can greatly assist the practitioner in this nice warm-up project. The cost of poor quality is more challenging to establish and much of the case for the savings and decrease in waste from projects like this may come from the representation of how value added time increases as a proportion of time spent.
Discussion, thoughts, or personal reports of how you demonstrated cost savings or “leaned the pan” in your operating room? We would love to hear your comments and thoughts beneath.
Every so often on the blog we explore the case study of a recent startup and highlight the use of some of the advanced tools we discuss as part of the startup’s background.
Here, we consider a startup we’ll call Very Awesome Homes (VAH). VAH is a startup focused on rentals in the Greater Orlando, Florida area home rental market. This startup team used multiple lean startup techniques in order to create the new venture. First, there was an intentional focus on the team and its skills.
The team of three involved with VAH knew each other from previous business ventures. They are NOT related, and they have the complementary skills required to make a go of a business like this. The skill set represented includes one team member with a focus on sales and service, another with an MBA (and startup) background, and a third focused on logistics and infrastructure. This team of three, prior to founding and the other steps of a startup, first made sure that the team was compatible and experienced, and that its particular skills were complementary for this particular endeavor.
Next, the team focused on a lean business model canvas. The value proposition was refined and focused on creating great experiences away from home at VAH’s rental properties.
Exactly what constituted a great experience? These and other important questions had to be answered as the team focused on a sustainable source of competitive advantage. It is a challenge to create sustainable competitive advantage in service and similar industries. In general, almost anything that can be done can be imitated and is not easily protected by intellectual property constructs like patents, etc.
Here, the team briefly scanned the rental market and realized that important factors included proximity to the local theme parks, ease of access for guests arriving from the airport, and, importantly, a focus on those things necessary to enjoy oneself while on vacation. It was this latter portion, how to enjoy oneself on vacation, that constituted some of the focus of the VAH team’s creation of competitive advantage. The team decided its minimum viable product would include positioning with some unusual features for rental properties: a 24-hour concierge service accessible to all guests at any of its properties, so that people new to the area could learn about local features that can otherwise be very challenging to access. The concierge could make reservations for guests for things like meals and events. The team also created an insider’s guide to the theme park area focused on things that only locals typically know. This was included as part of the minimum viable product and, as discussed before, the team focused on a luxury / premium model with premium pricing. The team believed the value added by the concierge justified premium positioning and a premium price, and differentiated their properties from others in the area.
The business model canvas also revealed other important features for the business, including the channels through which potential customers could access the rental property. The team decided to use one web outlet on which they had done previous research regarding positioning, site views, and rentability of similar homes. The team signed up for this venue and worked on creating a posting, and obtained professional photos to give their first rental property a premium feel and focus. In addition to these elements of the business model canvas, the team immediately established an investment arm so that revenue from the rentals could eventually generate passive income for the company. This was done at the onset so as to strengthen the company financially. These interesting choices were based on findings from the business model canvas.
Another important focus of the team was scalability. The team took to heart the common definition of a startup as an experiment in finding sustainable, scalable revenue streams. The team really did look upon this as an experiment from which they could learn and establish important metrics based on their business model canvas. Next, the team completed all the important legal aspects, including the relevant contracts, insurance, etc., and went on to found Very Awesome Homes, LLC.
The lessons from this startup are clear, and, incidentally, VAH has gone on to success. The highlights include the fact that VAH has already found a positive outcome from this experiment in finding a sustainable revenue stream. The infrastructure of VAH is highly scalable to more rentals, whether they be properties owned by the company, the company founders, or other home owners in the area. Lessons learned include the importance of an early focus on scalable tools, premium positioning, and the business model canvas, in addition to other elements of the lean toolset. The VAH team realizes the rental market may fluctuate with the economy and other issues, but so far their experiment has produced that scalable, reproducible revenue stream for which all startup experiments hunt.
Questions or comments? We are always interested to hear more about your lean startup ideas and stories of business models in which you have participated.
Business model innovation requires high-quality decision making. Sometimes, it is very challenging to apply the often counter-intuitive techniques to everyday decisions. Here’s a personal story meant to highlight the application of a different way of thinking in an everyday context:
One of the interesting facts about explicit decision making techniques is that they can be applied in everyday life, even though it sometimes seems difficult. Our last several blog entries have focused on EAST 2014 and its conference in Naples, Florida. Interestingly, EAST 2014 had more than just lessons to teach from the podium. For example, on my way back from the conference, I flew home on Spirit Airlines. Spirit has different restrictions on bag sizes and provides bag sizers so you can size your bag appropriately prior to attempting to board the plane. If your bag is oversized, they look to charge you for bringing the bag on the plane: $50 even for smaller bags.
I had flown down to Naples on Spirit Airlines with the same bag with which I was flying back. Importantly, my bag had fit in the sizer at the airport on the way down. I knew that it would do so again, as it was not overstuffed; it was actually smaller than it had been on the way down. I checked in online and went to a kiosk to print my boarding pass because the printer where I was staying did not work.
The woman looked at my bag and said, “You will need to pay $50 for your bag.” I explained that the bag had fit in the sizer, and had done so on the way down as well; this bag was my carry-on and could fly for free. The woman said that the bag could not extend above the sizer at all, which was not clear from the signage either at the Orlando airport or at the airport from which I had flown down. The bag fit in the sizer but extended slightly, by approximately an inch or two, above the cage-style sizer they had available. I explained nicely that the bag had fit on the way down, had fit in the overhead compartment on the jet, had fit in the sizer, and that I had not been charged or even questioned. The woman then gave me a puzzle without realizing it: she said that the bag would be $50 there, or $100 if there was an issue at the gate. I then considered the different ways in which the scenario could go.
Consider a decision tree, similar to the decision trees we have used on the blog before. This has become my habit over time, and I want to use this tongue-in-cheek example to demonstrate how it can be useful in everyday life. The question, at that moment, was: do I pay $50 for the bag at the kiosk, or do I take a chance and bring the bag to the gate, where it may cost $100? I envisioned the situation as a decision tree. The expected payoff of each branch was listed, and the rough probabilities, per my estimation, were included.
My experience flying Spirit out of the MCO airport (no data were available, so I had to use probability estimates) told me that the gate was sparsely manned, often overwrought with people, and that the staff were unlikely to give me any issues even if I tried to carry an elephant onto the plane. So, with the decision tree in mind, the expected utility was best if I thanked the woman and told her I would bring my bag with me. I thanked her and left. My expected utility from the branch “take your chances at the gate” was higher overall.
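As a sketch, the two branches compare like this (the probability of being charged at the gate is my own rough estimate, not measured data):

```python
# Branch 1: pay at the kiosk -- a certain $50.
pay_now = 50.0

# Branch 2: take the bag to the gate. Rough personal estimate (no hard
# data): a busy, sparsely staffed gate makes a charge unlikely.
p_charged_at_gate = 0.10
gate_expected_cost = p_charged_at_gate * 100.0 + (1 - p_charged_at_gate) * 0.0

# Choose the branch with the lower expected cost.
best = "pay at kiosk" if pay_now < gate_expected_cost else "take chances at gate"
```

Under this estimate the gate branch costs $10 in expectation versus a certain $50, so the decision tree favors taking the bag to the gate; a much higher probability estimate would flip the choice.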
Sure enough, at the gate 30 minutes later, the situation was as expected: the gate was overwrought with passengers, and no one paid attention to many of the bags that were much larger than mine which were being brought onboard. And even though only zones 1 and eventually 2 were being called, passengers from zones 3 and 4 boarded right from the beginning and sort of crashed the gate en masse.
The lesson here: Decision making techniques like the decision tree can be useful in everyday life. I don’t tell you this story to tell you how I was right or claim victory. I tell you this because the woman at the kiosk had used some powerful techniques to cajole me into paying $50 for a bag which was unlikely to be a problem. She used such techniques as anticipated loss to attempt to get me to pay for a bag which was likely, in fact, to make it through the gate situation. Anticipated loss is a powerful psychological technique where the anticipation of loss in the future is used as a strong motivator to get us to do something now. The idea that it will be ‘worse in the future’ can be used as a motivator even when this is statistically unlikely.
From this and many other experiences, I recommend taking a moment to think through different decisions with conditional probability. Yes, I may have had to shell out $100 at the gate; in fact, if we had a computer run the scenario hundreds of times, there would be some instances where I was out $100. However, based on the statistics involved and experience, it was exceedingly unlikely in this particular scenario. I invite you to use techniques like this in your everyday life, for such things as deciding whether to purchase insurance from Best Buy for new electronics. These techniques are especially useful when you are deciding whether to invest in a company or the market, or making decisions with your innovative business model.
Questions or comments? Please feel free to give your thoughts beneath. Do you have any instances where rigorous thinking helped you avoid unnecessary expense or issue? I would love to hear them.
You may have heard the term “gamification” (pronounced game-ification) previously. Gamification is the process of taking certain elements from the world of computer and board games and applying these toward motivational and customer retention strategies for different groups. Game dynamics may also be applied to other important functions for different companies. Importantly, gamification is a hot topic and is even being taught in some business schools. It is currently thought that gamification will resonate with the millennial generation (“millennials”) and subsequent generations to a greater degree than, for example, Generation X and the Baby Boom generation.
There are multiple important strategies in gamification that we could discuss in this blog post. Here, we focus on several important game dynamics as they were applied to a general surgery residency in 2012. Our group used game dynamics for our section of trauma, emergency surgery, and surgical critical care to assess their impact on resident motivation and perception of quality of learning. Here, we will discuss the dynamics we used and different outcomes. Interestingly, we also utilized game dynamics for our team of surgical attendings. We agreed to participate in a similar strategy so as to demonstrate our support for this new approach.
The setup included the creation, by the trauma and emergency surgeons, of a consensus set of behaviors the team wished to reinforce in residents. Similarly, the resident staff created consensus behaviors they wished to see demonstrated by surgical attendings. Many behaviors were already present to varying degrees, and the consensus behaviors were not a list of all-new behaviors; rather, they were ones each group wished to reinforce or make more common. Each group assigned point values to the behaviors. The point assignment was arbitrary and was contingent on several factors, including the relative scarcity of the event and its importance to our trauma and emergency surgery section as a whole.
We then set up an email address, and surgeons were able to email from their smartphones each time they caught a resident surgeon doing something correctly. This is a very new concept in residency education: “catching someone in the act” of doing something correctly. The entire focus was on catching the resident in the act of doing something good.
Next, residents were each given a letter so as to anonymize them. Each resident knew his or her letter only. The letters were displayed on a leaderboard in the trauma and emergency surgery conference room. Therefore, at each morning report, residents could see their progress and point accumulation relative to that of their anonymized colleagues. Certain point thresholds were set and displayed on the leaderboard; that is, there were point totals at which events took place. One such event was obtaining a new skill, such as the ability to clear a cervical spine: residents would be educated in cervical spine clearance and the appropriate template clearance note, and were then empowered to clear a cervical spine with the supervision of the trauma surgical attending. Other events occurred at different levels of achievement, including a letter of support to the residency program director for the resident’s file. Unbeknownst to the residents, the overall points leader at year’s end was given a special congratulations and a year-end gift at the residents’ awards dinner. This was the only time at which a resident’s point total was revealed; every resident except the overall points leader remained anonymous.
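The mechanic above, anonymized letters accumulating points toward unlock thresholds, can be sketched in a few lines of Python. The behavior names, point values, and thresholds below are invented for illustration; they are not the section's actual values.

```python
# Hypothetical thresholds: point total -> privilege unlocked at that level.
THRESHOLDS = {
    100: "cervical spine clearance (with attending supervision)",
    250: "letter of support to the program director",
}

class Leaderboard:
    def __init__(self):
        self.points = {}  # anonymized letter -> running point total

    def award(self, letter, points):
        """Record emailed-in points for a resident, identified only by letter."""
        self.points[letter] = self.points.get(letter, 0) + points
        return self.unlocked(letter)

    def unlocked(self, letter):
        """Privileges whose thresholds this resident's total has reached."""
        total = self.points.get(letter, 0)
        return [reward for level, reward in sorted(THRESHOLDS.items())
                if total >= level]

    def standings(self):
        """Letters ranked by points, as displayed at morning report."""
        return sorted(self.points.items(), key=lambda kv: -kv[1])

board = Leaderboard()
board.award("A", 60)
board.award("B", 80)
board.award("A", 50)  # "A" crosses 100 points, unlocking the first skill
print(board.standings())
```

Because the leaderboard stores only letters, everyone sees relative standings without anyone's identity being exposed, which is the anonymity property the residents valued.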
A survey was given to the residents prior to the institution of this motivational pathway. This validated instrument, the Job Satisfaction Survey (JSS), uses a visual analogue scale to measure job satisfaction and has been validated among emergency department physicians. Residents of all years participated in the system, and all received the survey before and after the year-long process.
A statistically significant improvement was noted in the proportion of residents who perceived the quality of their education to be excellent (two-tailed p < 0.01 by chi-squared test). These data were reported at a trauma and emergency surgery conference in Atlantic City, NJ.
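For readers who want to see the mechanics of such a comparison, here is a hedged sketch of a chi-squared test on a 2x2 table (pre vs. post, rating “excellent” vs. not). The counts are invented; the study's actual data were presented at the conference and are not reproduced here.

```python
import math

def chi_squared_2x2(a, b, c, d):
    """Pearson chi-squared statistic and p-value (df = 1) for the table
    [[a, b], [c, d]], e.g. rows pre/post and columns excellent/not."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Survival function of chi-squared with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical counts: 8/30 rated "excellent" pre, 21/30 post.
stat, p = chi_squared_2x2(8, 22, 21, 9)
print(f"chi2 = {stat:.2f}, p = {p:.4f}")
```

With these made-up counts the p-value falls well below 0.01, illustrating the kind of pre/post shift the study reported.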
This is one nice case study of how gamification is possible in surgical residency. The program leverages multiple dynamics, including comparison of each individual to a peer group and a focus on positive reinforcement of appropriate behavior. Experientially, the trauma surgeons involved found it highly effective in improving resident behavior and reinforcing positive behavior on the service. Interestingly, some residents initially felt that gamification might belittle or somehow cheapen resident education, and a minority felt that turning their residency into a game was not appropriate; the very terminology of “gamification” was felt to decrease buy-in. However, once residents saw that this was merely an assessment tool that focused on positive behavior and reinforced it while maintaining anonymity, they became much more receptive, and by the end of the process these same residents were reporting positive results. Again, this type of education is a far cry from typical resident education. There was no ability to remove points from any participant at any time, and, again, focus was placed on what the residents did appropriately according to the defined behaviors.
Experientially, our team learned from this that gamification is achievable in the inpatient medical education world. We also learned that this innovative process was a true boon for the human resources portion of our section: it changed the dynamic and interaction between the surgeons and the residents for the better. Performance also seemed to improve greatly, and there was a constancy of expectation of the residents by the trauma and emergency surgeons, and vice versa.
As mentioned, the trauma surgeons also participated in the system. They, too, were anonymized and had a leaderboard: each trauma surgeon was given a letter, and each knew only his or her own letter. Point totals accumulated as residents sent emails to a third party who was not one of the attending surgeons, and these were then reflected on the leaderboard. Many of these dynamics have names in the gamification world. For example, the system described above incorporates some of the most basic game dynamics, known as PBLs: points, badges, and leaderboards. Points and leaderboards are self-explanatory; they leverage positive peer pressure and reinforcement to increase performance. Badges are the things achieved to signify an improvement in level. Although we did not give true badges to be worn on the physician’s coat, stickers and other cues may be options for other programs.
We did use another dynamic, “leveling up,” to allow the gamification process to recognize good performance and to increase point accumulation by letting residents obtain new skills independent of their year level. Although senior residents had certain skills grandfathered in based on year level, such as a PGY-5’s ability to clear cervical spines with the supervision of an attending surgeon, younger residents were able to attain these and other skills by achieving a point total that demonstrated competency in the performance of similar tasks. This makes for a competency-based method of advancement and attainment of new skills.
Surgeons and residents were greatly satisfied with this innovative system, and it is one you may apply in your own educational or motivational process. Consider applying it to your team of surgeons, residents, or advanced practitioners.
Questions or comments? As always we invite your thoughts.