All training professionals understand that evaluating training is an important component of our work, particularly to ensure that we are doing our jobs well. In fact, evaluating the impact of learning is one of the ten core competencies identified for workplace learning and performance professionals by the Association for Talent Development (formerly ASTD). Measurement and evaluation are also important elements of the ANSI/IACET 1-2013 Standard (Category 4: Learning Event Planning; Category 5: Learning Outcomes; and Category 8: Assessment of Learning Outcomes), and constructing training programs that are aligned to the Standard ensures the incorporation of measurement and evaluation throughout.
Unfortunately, though, evaluation is sometimes an afterthought. While the A.D.D.I.E. framework (Analysis, Design, Development, Implementation, Evaluation) lists evaluation last, great training programs start with the evaluation in mind. The “A” (Analysis) in the A.D.D.I.E. framework is essentially an evaluation of need intended to clarify what our training programs should accomplish. In this phase, it is important for us to clearly answer essential questions about the need the training is meant to address.
Equipped with the answers to those questions, we can design and develop a curriculum and an evaluation plan. My recommendation is to determine the evaluation plan first and then construct the curriculum in a way that lends itself to achieving the desired results of the evaluation. So, for example, if the evaluation design you select for your training program incorporates a pre-test and post-test to measure comprehension (level 2 evaluation), the pre-test can be included in the curriculum when the training program is being developed.
Great training programs also include evaluations that go beyond learner reactions—level 1 in Kirkpatrick’s four-level evaluation model. While level 1 evaluations are helpful in understanding learners’ reactions to the training program (including the instructor, materials, facilities, etc.), they should also incorporate questions around learners’ intentions for making a change on the job. A prompt that I like to use is, “As a result of completing this course, please list one change that you will make on the job in the next 30 days.” For some courses, I even have learners write the answer on the back of a self-addressed postcard that I mail them 30 days after the course to see if they have reached level 3—applying what they learned in the training program to their job.
Figure 1: Kirkpatrick's 4 Levels of Evaluation
Great training programs also involve evaluations well after the training event to determine if and how the training is being transferred back on the job. This level 3 evaluation might include interviewing the managers (or subordinates) of learners to find out if they have seen a change in the learner’s behavior, and it might include asking the learners if they carried through with their intention to apply what they learned on the job as they indicated in the course evaluation (moving from a level 1 to a level 3).
And, of course, great training programs evaluate real business results. All great training programs should address a real business strategy/challenge/problem and should be designed with an understanding of 1) how the training will address it and 2) to what degree. For example, a sales force training program offered to improve a firm’s customer relationship management could be measured by a change (increase) in customer satisfaction, customer retention, sales by repeat customers, customer referrals, and customer referral sales. To isolate the effects of the training program, the evaluation design might also include a control group to compare the metrics of training program participants against those who did not participate in the training program.
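One common way to use such a control group to isolate the training’s effect is a simple difference-in-differences comparison: subtract the control group’s change in the metric from the trained group’s change. A minimal sketch follows; the function name and the satisfaction scores are hypothetical, not from this article.

```python
def training_effect(treated_pre: float, treated_post: float,
                    control_pre: float, control_post: float) -> float:
    """Difference-in-differences estimate of a training program's isolated effect.

    Subtracts the control group's change (what would likely have happened
    without training) from the trained group's change. All four inputs are
    the same business metric, measured before and after the program.
    """
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical customer-satisfaction scores (0-100 scale)
effect = training_effect(treated_pre=72, treated_post=81,
                         control_pre=71, control_post=74)
print(effect)  # 6.0: training accounts for ~6 points of the 9-point gain
```

The design choice here is deliberate: comparing raw post-training scores alone would credit the program with improvement that market conditions produced for both groups.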
To prove the real economic value of training, a great training program should also include return on investment (ROI) calculations to demonstrate that the benefit or value derived from the training program is greater than its cost. So, for example, if the sales force training cost the firm $20,000 (including the instructional design, instructor fee, space, materials, learners’ wages while participating in training, etc.), and the training increased the firm’s sales by $400,000, the return on investment would be calculated as:
Benefit/Cost Ratio = Total $ Value of Benefits / Total Cost of Training Program
= $400,000 / $20,000
= 20:1 (or 2,000%)
So, for every dollar the firm invested in the training program, it generated $20 in sales.
Net Benefits = Total $ Value of Benefits - Total Cost of Training Program
= $400,000 - $20,000
= $380,000
ROI (%) = (Total $ Value of Benefits - Total Cost of Training Program) / Total Cost of Training Program x 100
= $380,000 / $20,000 x 100
= 1,900%
So, for every dollar the firm invested in the training program, it derived $19 in net benefit.
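The arithmetic above can be captured in a small helper that computes all three metrics at once. This is an illustrative sketch; the function and field names are my own, not part of any standard.

```python
def training_roi(total_benefits: float, total_cost: float) -> dict:
    """Compute the three training-evaluation metrics discussed above.

    total_benefits: total dollar value of benefits attributed to the training
    total_cost: fully loaded program cost (design, instructor, space,
                materials, learners' wages while in training, etc.)
    """
    net_benefits = total_benefits - total_cost
    return {
        "benefit_cost_ratio": total_benefits / total_cost,  # e.g. 20.0, i.e. 20:1
        "net_benefits": net_benefits,                       # dollars
        "roi_percent": net_benefits / total_cost * 100,     # e.g. 1900.0
    }

# Figures from the sales force training example
metrics = training_roi(total_benefits=400_000, total_cost=20_000)
print(metrics)
# {'benefit_cost_ratio': 20.0, 'net_benefits': 380000, 'roi_percent': 1900.0}
```

Note the distinction the two ratios draw: the benefit/cost ratio counts every benefit dollar (20:1), while ROI counts only the dollars left after recovering the program’s cost (1,900%).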
Ultimately, for a training program to be a great training program, it must include a comprehensive evaluation plan. Evaluation should not be an afterthought; a great program begins with the evaluation in mind.
Kristopher Newbauer, EdM, MHRM, SPHR, CPLP, CPT is IACET’s Board President and is the Director of HR Operations/Learning & Organization Development at Rotary International, the oldest—and one of the largest—humanitarian service club organizations in the world. Mr. Newbauer holds a Master of Education (EdM) in Global Human Resource Development from the University of Illinois at Urbana-Champaign and a Master of Human Resource Management (MHRM). Newbauer currently serves on the adjunct faculty in the Department of Educational Leadership and Development, College of Education at Northeastern Illinois University, where he teaches a graduate-level course in measurement and evaluation. He previously served on the adjunct faculty in the graduate program of the Department of Education Policy, Organization and Leadership in the College of Education at the University of Illinois at Urbana-Champaign.