What do you do with your end-of-course training surveys? What do they actually mean?
Who are you pulling data from to decide what the best practices are for the next wave of new trainees?
How are you analyzing the data you get for how people perform and how to emulate their successful behavior?
Do you realize that there is an inherent bias to how most people pull their data? It is called Survivorship Bias, and it is skewing how you develop your training.
Survivorship bias is the error of studying only the successful side of performance. We need that information, but it isn’t telling the whole story. To illustrate, I’ll use the definitive story of survivorship bias.
During World War II, the Statistical Research Group (SRG) at Columbia University was tasked with studying damage to bombers returning from their runs. They were asked to determine where best to armor the planes to improve their survival rate. The research measured how often the planes could be hit and still return, then mapped where those hits occurred. At first, the idea was to armor the places the planes were hit the most. However, a Hungarian mathematician who had fled the Nazis in Austria pointed out that they were measuring where a plane could be hit and still survive, so they should armor the places where no hits were recorded, because hits there were most likely what took down the planes that never returned. They were researching the survivors, because it is nearly impossible to research the lost.
For more on survivorship bias, go here. For more on the math and story of Abraham Wald and the methods he used, go here. It is really fascinating, especially when you read about the equation (I am a math geek) and the assumptions behind it, which are just as relevant when applied to how we develop training.
We have survivorship bias in researching performance in training. I often hear, frequently from other IDs and analysts in business, that the company wants us to measure the behaviors and processes of the top performers. But is that really the best demographic to measure?
A business isn’t typically supported entirely by its best performers. If you put all performance on a bell curve, the top performers sit at least one standard deviation above the median, while the business is sustained around the middle. We are measuring the outliers – those who may have an innate talent or a highly developed skill set after years of experience, most commonly both – and then expecting our new hires or newly promoted people to emulate that performance.
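To put rough numbers on that bell-curve argument: if performance is approximately normally distributed, the "top performers" at least one standard deviation above the mean are a small minority, while most of the business runs on the middle band. A minimal sketch (the mean and standard deviation here are purely illustrative, not real performance data):

```python
from statistics import NormalDist

# Assumed, illustrative performance distribution (e.g., a scored metric)
perf = NormalDist(mu=100, sigma=15)

# Share of people at least 1 standard deviation above the mean
top_share = 1 - perf.cdf(perf.mean + perf.stdev)

# Share of people within 1 standard deviation of the mean
median_band = perf.cdf(perf.mean + perf.stdev) - perf.cdf(perf.mean - perf.stdev)

print(f"Top performers (>= +1 SD): {top_share:.0%}")      # ~16%
print(f"Median performers (+/- 1 SD): {median_band:.0%}") # ~68%
```

In other words, modeling training on the top group means modeling it on roughly one person in six, while the roughly two-thirds who actually sustain the business go unmeasured.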
No wonder companies complain of a high attrition rate within the first 90 days.
We definitely want to study top-performers, but we also want to study the behaviors and processes of the median-performers and the low-performers. But there is more.
How many people study those who quit or were fired? Our top performers may show best practices (honestly, I think we get best practices – the basics of doing the job well – from the median performers), but in every case we are only measuring the people who are still in the company – the survivors.
We should measure those who failed, because that is where the gaps are. These are the people who, for some reason, could not do the job or develop into their role.
When we base our evaluation of a training iteration on end-of-course surveys, we are only measuring the move from knowing “little to nothing” to “now knowing something.” We are also only evaluating those who survived the course. The success of a course should not be judged by the end-of-course survey; it should be based on how many of those people are left after 90 days and what their performance looks like.
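The 90-day measure above is straightforward to compute from hire records. A minimal sketch, assuming a simple cohort list with graduation and exit dates (the field names and dates are hypothetical):

```python
from datetime import date

# Hypothetical training cohort: None for "left" means still employed
cohort = [
    {"name": "A", "graduated": date(2024, 1, 15), "left": None},
    {"name": "B", "graduated": date(2024, 1, 15), "left": date(2024, 2, 20)},
    {"name": "C", "graduated": date(2024, 1, 15), "left": date(2024, 6, 1)},
]

def retained_at_90_days(person, as_of=date(2024, 7, 1)):
    """True if the person stayed at least 90 days past graduation."""
    end = person["left"] or as_of
    return (end - person["graduated"]).days >= 90

rate = sum(retained_at_90_days(p) for p in cohort) / len(cohort)
print(f"90-day retention: {rate:.0%}")  # 2 of 3 retained -> 67%
```

Comparing this retention rate across training iterations says far more about a course than any end-of-course satisfaction score.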
Then, measuring the performance of those who were terminated provides some insight into where the gaps are. Of course, when measuring the lost, just as with the downed bombers, you have to make some basic and educated assumptions and accept that there is probability in your measurement. However, by measuring the performance of terminated trainees, you get something that arms your management decisions – predictive data.
You can start finding behavioral trends among those who left within certain lengths of tenure. People who left after training ended but before their 90-day mark will show different performance trends than someone who left before the six-month or even one-year mark. With this information, management can identify potential performance issues before they have to write someone up or put them on notice – even before they quit – and intervene with remediation, extra coaching, or other special attention.
Learning professionals understand that the cost of a new hire or promotion is more than just the salary of those in training. The cost savings and stability of a company can be greatly improved by identifying potential attrition with predictive analysis from researching and analyzing the performance of the lost. This also provides the opportunity to train to best practices and mitigate the gaps at the same time, while observing who is starting on a trend towards leaving.
Being aware of and mitigating survivorship bias will greatly improve training and be a force multiplier by increasing retention. Training should be evaluated in the space between success and failure, and at a time beyond the graduation point and the end-of-course survey. We can keep armoring the parts that succeed, or we can start armoring our training at the points where learners start to crash.