From the course: L&D Foundations: Essential Skills for L&D Leadership

Evaluating training effectiveness

- Well, if you want to keep your seat at the table, as well as your budget and resources, you'll need to show results that matter. Unfortunately, many L&D leaders limit their evaluation to learner satisfaction surveys and quiz scores. They don't measure the transfer of learning to performance on the job or the impact that training has on the organization's broader objectives. Yes, this is difficult to do, but ultimately you'll find that it's well worth the effort.

Let's return to the story of my colleague who used a data-driven needs assessment to turn around a failing productivity software training program. After successfully implementing the new training program, she knew that evaluating the results would be just as beneficial as the initial needs assessment had been. Having worked in L&D for many years, she knew to begin with the end in mind. She planned upfront to conduct a thorough, four-level Kirkpatrick evaluation to demonstrate the program's impact and value and to provide insights to guide continuous improvement. If you're unfamiliar with the Kirkpatrick model, it's a framework for systematically evaluating the effectiveness and impact of training programs across four levels. Level one, reaction, measures learner satisfaction with the training. Level two, learning, assesses knowledge improvement. Level three, behavior, examines on-the-job application. And level four, results, evaluates the business impact of training on metrics like productivity.

Her team used feedback surveys gathered right after the training to measure level one learner reaction to the content, format, experience, and relevance of the training. They found that the bite-sized video tutorials and the support of the improved help chatbot were particularly well received, which showed that the new program was resonating. These data also provided immediate qualitative insights for real-time program improvements.

When measuring level two learning, she knew that knowledge tests would be ineffective for measuring learners' actual use of the software. So again, she collaborated early on with evaluation experts to design qualitative assessments using focus groups. These focus groups, conducted three months after launch, showed improved learner satisfaction with the software and greater utility, indicating positive knowledge retention while also identifying knowledge gaps that still needed to be addressed.

For level three behavior change, my colleague's team worked closely with the technology analytics teams during the assessment phase to identify and track problems with adoption of the software. After six months, the number of support tickets had dropped against pre-training baselines, indicating better adoption, but the improvements were not consistent across all areas, revealing the need to produce additional video tutorials.

Demonstrating level four business results took time, persistence, and executive patience. But after a year, measurements of productivity gains driven by improved adoption of the software made it clear that the new training had a real impact on results. My colleague was then able to show the company's leaders the value and ROI generated by the training program.

As you can see, evaluation efforts like the ones my colleague conducted take time, expertise, dedication to the cause, and a practice of beginning with the end in mind. As an L&D leader in your organization, it's essential that you know how to do this. Be sure to complete your skills gap analysis in the exercise files and bookmark the recommended LinkedIn courses.
Then practice and hone your skills by taking the training evaluation scenario challenge that I've included in the exercise files.