At the National Treasury Employees Union’s legislative conference last week, Katherine Archuleta, director of the Office of Personnel Management, said, “The President’s budget proposal will include measures to improve federal employee training and support an exchange of training ideas across government, part of the conversation that [NTEU President] Colleen [Kelley] and other labor representatives are going to be having in the Labor Management Council. We need to learn from one another about what works. We need to be able to talk about our successes.”
For far too long, federal agencies have looked to the training budget as one of the first places to cut (after travel) when budgets are tight. Training cuts are among the most shortsighted budget-cutting options: they trade small savings today for a lack of capability tomorrow. Although such cuts are typically justified as protecting dollars devoted to the mission, the result is employees who lack current training in crucial mission skills. The renewed emphasis on training in the 2015 budget is a good sign that the dark times for employee training may be coming to an end.

I was also pleased to see Director Archuleta’s focus on sharing information about what works. “What works” is sometimes difficult to define other than through anecdotal evidence. One key shortcoming of many training programs is the evaluation of their effectiveness. Real training evaluation goes far beyond simply asking attendees whether they liked the training, or asking managers whether they think their employees did better afterward. Done properly, training evaluation can help agencies determine whether specific training improves employee skills, customer experiences and mission outcomes. If more training programs were accompanied by proper evaluations, we would learn far more about “what works” and what does not. That would lead to far better use of training dollars and better outcomes.
One area where we have a good idea of “what works” is leader development. Other than core mission skills, leader development is one of the best investments of training dollars. Money spent on leader development is leveraged by the effects leaders have on the people they lead. Because many of the poor results in the Federal Employee Viewpoint Survey are directly or indirectly attributable to the quality of supervision, the potential payoff from effective leader development programs is tremendous. Given that, and the fact that training budgets will be tight even with more dollars, how can agencies make certain they spend wisely?
What are the best examples of organizations that are able to measure the impact of leadership development programs?
What specific techniques, and what organizational context, do these organizations need in order to link leadership development content to organizational metrics?
Can these best practices be transplanted into other organizations, thereby allowing them to assess and improve the outcomes of their leadership development programs?
To answer those questions, the research team interviewed an expert panel (including chief learning officers), conducted a literature review, did “best case” interviews and administered two surveys. The research resulted in several key findings:
It is possible to effectively (and efficiently) link the outcomes of leadership development to organizational success measures, but organizations that do it well are exceedingly rare, and doing it well takes practice.
Best Case organizations that do evaluation well report that it takes far fewer resources (time, people and money) than those who only speculate about the difficulty of implementing a robust evaluation system assume.
Organizations that already have a culture of measurement (i.e., routinely measuring the effectiveness of programs, services, outcomes, etc.) have a much easier time standing up a training evaluation system for leadership development.
There are many techniques available (some more qualitative, some more quantitative) that can work for different organizations, but each organization must select the ones that fit it. The study identified 29 such techniques.
Best Case organizations have baseline data, a formal evaluation plan in place, and use more advanced types of measurement approaches such as control groups and time-series approaches.
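The control-group and time-series approaches mentioned above can be illustrated with a simple difference-in-differences comparison: measure a trained group and an untrained control group before and after the program, and subtract the control group's change from the trained group's change. The sketch below uses entirely hypothetical supervisor-effectiveness ratings for illustration; the function name and data are my own, not from the study.

```python
from statistics import mean

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Difference-in-differences estimate: the trained group's
    pre/post change minus the control group's pre/post change."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical 1-5 supervisor-effectiveness ratings before and after training.
trained_before = [3.1, 2.8, 3.4, 3.0]
trained_after  = [3.9, 3.5, 4.1, 3.8]
control_before = [3.0, 3.2, 2.9, 3.1]
control_after  = [3.1, 3.3, 3.0, 3.2]

effect = diff_in_diff(trained_before, trained_after, control_before, control_after)
print(round(effect, 2))  # 0.65
```

Subtracting the control group's drift is what lets an agency attribute the remaining improvement to the training rather than to other changes happening at the same time, which is why baseline data collected before the program begins is essential.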
ROI measurement is still rare in the profession.
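For context, training ROI is conventionally computed as net monetary benefits divided by program costs, expressed as a percentage. The figures below are hypothetical, chosen only to show the arithmetic; the hard part in practice is converting outcomes such as reduced turnover into defensible dollar amounts.

```python
def training_roi(monetary_benefits, program_costs):
    """Conventional training ROI: net benefits as a percentage of costs."""
    return (monetary_benefits - program_costs) / program_costs * 100

# Hypothetical: a program costing $200,000 that yields $260,000 in
# measured benefits (e.g., lower turnover, fewer grievances).
print(training_roi(260_000, 200_000))  # 30.0
```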
The good news from the report and literature review is that training programs in general can clearly benefit from evaluations of their effectiveness. Training evaluation is less resource intensive than many people believe, and there are many effective techniques that can be tailored to fit the culture, mission and requirements of an organization. If agencies become more committed to training evaluation, the federal government will gain far more knowledge about the effectiveness of programs, ways to get better results with fewer dollars and the real mission benefits of training. That knowledge can serve as the business case for increased investment in training and the improved results it can produce.
Jeff Neal is founder of the blog, ChiefHRO.com, and a senior vice president for ICF International, where he leads the Organizational Research, Learning and Performance practice. Before coming to ICF, Neal was the chief human capital officer at the Department of Homeland Security and the chief human resources officer at the Defense Logistics Agency.