Developing Training Field Manual

The world is changing and so is the world of learning and development. This wiki is intended to be a 'field-book' or 'manual' for trainers who are supporting organisations through change into the future. It is not a how-to manual, although such insights may appear along the way. Rather, it offers the next generation ideas and approaches for their own practice, drawn from the inquiry of others.

It is a participative space where ICTI members can contribute their own perspectives, suggest areas and topics to be added, correct errors and update the content with more recent ideas and practice.

Creative Commons Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License


Impact

Training impact is often poorly measured, if it is measured at all.

The influence of Kirkpatrick's (1959) four levels of evaluation seems to drive most trainers to the easiest and possibly least helpful of the levels - the reaction of learners to what they have received. Training events invariably end with the 'happy sheet', which tells us that the course was enjoyed but tells us little about its impact.

Kirkpatrick's fourth level, by contrast, is what we most need to know about, yet it is the level we rarely see much effort to understand. How did the training affect the aims and objectives of the organisation? Does radio X make better programmes as a result? Has the quality of the journalism improved at TV Y since the course? Are more listeners or viewers tuning in? Has the quality of the magazine improved? Are the readers more passionate and connected?

For this reason, evaluation carried out some time after a course is possibly the most powerful. It allows learners, trainers and managers to identify whether more training is required, whether the course needs to be amended, and whether it should be offered on a regular cycle.

The initiative for evaluating against business objectives falls to the trainer. Managers are likely to look at the bottom line - did we get a return on the investment we made? But trainers can encourage a look at the wider issues.

Dyer (1994) suggested a useful way of designing courses to ensure that they meet the business objectives. He turned Kirkpatrick's levels upside down to demonstrate that design should start with the business objectives. These force the trainer to ask what needs to change in the way the organisation and its staff work in order to achieve those objectives. This in turn highlights the learning needs. Only then can we ask how this learning will best be delivered and received.

Figure: Dyer's approach to planning using Kirkpatrick's levels

Measuring the impact of training demands time and energy which need to be set aside. It may mean declining other training assignments in order to ensure adequate time for the level four evaluation to be undertaken. It certainly means allocating sufficient resources in the budget and the diary.

Funders of training often demand an evaluation of the impact of the intervention. If nothing else, it allows them to be sure that they made a good investment. It is not unknown for such measures to be focused on units of service, such as the number of participants attending a course. This is set up in advance in the project design by defining the measure of success as the number of participants. It allows the trainer to say that she achieved the anticipated number of participants but, following Kirkpatrick, the numbers do not demonstrate that the organisation has been better equipped as a result of the training.

Impact should also be viewed over the longer term (Zinoviev and Rotem, 2008) and may include unintended results.

Zinoviev, M. and Rotem, A. (2008) Review and Analysis of Training Impact Evaluation Methods, and Proposed Measures to Support a United Nations System Fellowships Evaluation Framework. Geneva: World Health Organization.

