If what is stated in the article were entirely true, then most organisations, including the Social Service Training Institute (SSTI), where I work, are simply wasting their time and resources compiling Level 1 data from reaction questionnaires and basing operational decisions about training on this data. SSTI conducted over 300 short courses in FY08/09, filling more than 6,000 training places. This translated into SSTI staff having to compile over 300 individual post-training Level 1 reports on our associate trainers’ performance. If each report takes an average of half an hour for the training coordinating staff to generate and for management to review, then about 150 hours, or roughly 19 eight-hour working days, are spent on Level 1 data each year. Moreover, SSTI uses Level 1 data to evaluate the performance of its associate trainers. Under such an evaluation system, it is unfair that trainers who are responsible and do a conscientious job of facilitating training and learning, but who may not be “entertaining” or “interesting” to participants, are effectively “punished”.
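As a back-of-the-envelope check on those figures (the course and report counts come from the paragraph above; the eight-hour working day is my own assumption), a minimal sketch:

```python
# Rough annual cost of compiling Level 1 reports at SSTI (FY08/09).
# The eight-hour working day is an assumption, not an SSTI figure.
reports_per_year = 300      # one post-training Level 1 report per short course
hours_per_report = 0.5      # generation by coordinators plus management review
hours_per_workday = 8       # assumed standard working day

total_hours = reports_per_year * hours_per_report   # 150.0 hours
workdays = total_hours / hours_per_workday          # 18.75 working days
print(f"{total_hours:.0f} hours ≈ {workdays:.0f} working days per year")
```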
To determine whether Level 1 data are effective for evaluating a trainer’s performance, we first need to establish what exactly the core performance areas of a typical trainer consist of, and what the final outcome of a training session should be. The core business of a trainer is simply to enhance the transfer of learning in terms of knowledge, to develop participants’ attitudes so that they are interested in the subject matter and inspired to apply what they know, and finally to develop certain skills that are relevant to the participants’ work. The final outcome, from an organisation’s perspective, should be improved performance by the participants at the workplace after training. Level 1 evaluation does not provide objective information related to the key performance areas of the trainer (which actually pertain to Level 2 evaluation) or to the final outcome of training (related to Level 3 evaluation). Those who rely solely on Level 1 information to evaluate trainers are actually working from the hypothesis that there is a correlation between learning and the positive training experience (created by the trainer) encountered by the participants. While this hypothesis may be true to some extent, there are other factors that may affect participants’ inclination to learn and to apply the knowledge from the training session. These factors include the following:
- Difficulty of the subject being taught – The more difficult the subject, the less inclined participants are to learn and apply it
- Perceived relevance to work – The more strongly participants perceive that the subject is relevant to their work, the more inclined they are to learn and apply it
- Perceived management support – The stronger the management endorsement of post-course changes, the more inclined participants are to learn and apply
- Perceived benefits linked to the training – If tangible benefits are attached, e.g. a salary increase pegged to completion of the course, participants are more motivated to learn and apply
These factors interplay dynamically, and together they determine whether each participant is motivated to learn.
Having said this, some organisations may justify their current practice of using Level 1 evaluation to gauge trainer performance by saying that it is an easy and convenient way of getting an immediate response to a training session. Level 2 evaluation is simply too time-consuming and takes organisational resources to implement: participants need time to take written tests, and training staff are required to conduct and oversee the testing process as well as compile the results. All this work and all these resources, just to evaluate the effectiveness of the training, may not be cost-effective for every training session. They argue that even though Level 1 evaluation may not be totally accurate in evaluating training effectiveness, it at least provides some data on participants’ reactions, which in theory still relates to participants’ motivation to learn. One suggestion I have for getting the best of both worlds is to take into consideration participant ratings for the difficulty of the subject, the relevance of the subject to their work, and the participants’ own perception of how much they learned. Looking at these three areas together should yield a more realistic assessment of the effectiveness of the training session, as sketched below.
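A minimal sketch of how those three rating areas might be combined, assuming a 1–5 rating scale; the weights and the difficulty adjustment are my own illustrative choices, not something prescribed by the article:

```python
def adjusted_effectiveness(difficulty, relevance, perceived_learning):
    """All inputs are mean participant ratings on an assumed 1-5 scale.

    Perceived learning is weighted most heavily, and a small credit is
    given back for difficult subjects, since difficulty depresses
    reaction scores independently of anything the trainer did.
    """
    difficulty_credit = (difficulty - 3) * 0.25   # harder than average -> bonus
    return 0.6 * perceived_learning + 0.4 * relevance + difficulty_credit

# Example: a hard (4.5) but relevant (4.0) course where participants felt
# they learned a fair amount (3.5).
print(adjusted_effectiveness(difficulty=4.5, relevance=4.0, perceived_learning=3.5))
# 0.6*3.5 + 0.4*4.0 + 0.375 = 4.075
```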
Another suggestion I have is to include a section in the smiley sheet where participants rate for themselves how much they knew about the subject before the training session and how much they know after it. In this way, the focus is not so much on the trainer but on the participants as learners. They would have to ask and judge for themselves whether they have learned anything, regardless of whether they like or dislike the trainer. Of course, this suggestion is not fool-proof either: as the article already mentions, learners tend to be overly optimistic about predicting how much they will remember. A simple way of summarising such before/after ratings is sketched below.
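A minimal sketch of how the self-rated gain could be summarised per course, assuming a 1–5 knowledge scale; all figures are made up for illustration:

```python
# Self-rated knowledge before and after the session, one pair per
# participant, on an assumed 1-5 scale. The figures are invented.
before = [2, 1, 3, 2, 2]
after = [4, 3, 4, 3, 4]

gains = [a - b for a, b in zip(after, before)]
mean_gain = sum(gains) / len(gains)
print(f"Mean self-reported learning gain: {mean_gain:.1f} points")  # 1.6 points
```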
Personally, I prefer the method suggested by the writer: to “employ multiple mechanisms” in order to “put that smile back into smile sheets”. The concept of holding mini focus-group sessions and asking participants how they would redesign or improve the training is an excellent way of reinforcing what they have learned and of clarifying what was not clear to them. Moreover, having tried out what they previously learned at the training session, participants would have experienced its impact at work, and any new issues they faced can be highlighted to the trainer, who can then tweak the training programme to be more relevant. Last but not least, they would understand the constraints and limitations faced by the trainer and would be more realistic in their expectations of trainers and of learning in future.
To conclude, my personal take is that Level 1 evaluation is still important: it provides the most immediate and easiest way of getting training evaluation data. We should use Level 1 data as feedback to improve future training sessions, covering everything from the training environment to the learning methodology and training delivery. If we are to use Level 1 data for purposes other than feedback on the training session, such as the evaluation of trainers, we must take into consideration its limitations, its accuracy, and the other factors that affect the correlation between participants’ perception of the training session and the item we wish to evaluate. Of course, pragmatic considerations, e.g. the cost, time and resources invested in the evaluation process, must also be weighed, so that whatever analysis we perform on the training evaluation yields conclusions that are both reliable and cost-effective for the organisation.