CHAPTER ELEVEN: Training Evaluation
© 2013 by Nelson Education Ltd.
LEARNING OUTCOMES
- Define training evaluation and the main reasons for conducting evaluations
- Discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted
- Describe the different types of evaluations
- Describe the models of training evaluation and the relationship among them
LEARNING OUTCOMES (cont'd)
- Describe the main variables to measure in a training evaluation and how they are measured
- Discuss the different types of designs for training evaluation, their requirements and limits, and when each should be used
INSTRUCTIONAL SYSTEMS DESIGN MODEL
Training evaluation is the third step of the ISD model and consists of two parts:
- Evaluation criteria: what is being measured
- Evaluation design: how it will be measured
These concepts are covered in the next two chapters. Each has a specific and important role to play in the effective evaluation of training and the completion of the ISD model.
TRAINING EVALUATION
A process to assess the value, or worthiness, of training programs to employees and to organizations.
TRAINING EVALUATION (cont'd)
- Not a single procedure, but a continuum of techniques, methods, and measures
- Ranges from simple to elaborate procedures
- The more elaborate the procedure, the more complete the results, but usually the more costly in time and resources
- Select the procedure based on what makes sense and what can add value within the resources available
WHY A TRAINING EVALUATION?
- Improve managerial responsibility toward training
- Assist managers in identifying what, and who, should be trained
- Determine the costs and benefits of a program
- Determine whether a training program has achieved its expected results
- Diagnose the strengths and weaknesses of a program and pinpoint needed improvements
- Justify and reinforce the value of training
DO WE EVALUATE?
There has been a steady decline in Level 3 and Level 4 evaluation and in determining ROI.
BARRIERS TO EVALUATION
Barriers fall into two categories:
1. Pragmatic
- Evaluation requires specialized knowledge and can be intimidating
- Data collection can be costly and time consuming
2. Political
- Evaluation has the potential to reveal the ineffectiveness of training
TYPES OF EVALUATION
Evaluations may be distinguished from each other with respect to:
1. The data gathered and analyzed
2. The fundamental purpose of the evaluation
TYPES OF EVALUATION (cont'd)
1. The data gathered and analyzed
- Trainee perceptions, learning, and behaviour at the conclusion of training
- The psychological forces that operate during training
- Information about the work environment, such as the transfer climate and learning culture
TYPES OF EVALUATION (cont'd)
2. The purpose of the evaluation
- Formative: provide data about various aspects of a training program
- Summative: provide data about the worthiness or effectiveness of a training program
- Descriptive: provide information that describes trainees once they have completed a training program
- Causal: provide information to determine whether training caused the post-training behaviours
MODELS OF EVALUATION
A. Kirkpatrick's Hierarchical Model
- The oldest, best-known, and most frequently used model
- The four levels of training evaluation:
  - Level 1: Reactions
  - Level 2: Learning
  - Level 3: Behaviours
  - Level 4: Results
- ROI, often treated as a fifth level (a standard formulation is sketched below)
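The slides list ROI among the levels without giving a formula. Where ROI is computed, a standard formulation (general practice, not specific to this text) expresses net program benefits as a percentage of program costs:

\[
\mathrm{ROI}\,(\%) = \frac{\text{program benefits} - \text{program costs}}{\text{program costs}} \times 100
\]

For example, a program costing $50,000 that produces $80,000 in measurable benefits has an ROI of (80,000 - 50,000) / 50,000 x 100 = 60%.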
CRITIQUE OF KIRKPATRICK'S MODEL
- There is general agreement that the five levels are important outcomes to be assessed
- There are, however, some critiques:
  - Doubt about the validity of the hierarchy
  - Insufficiently diagnostic
  - The model requires all training evaluations to rely on the same variables and outcome measures
MODELS OF EVALUATION (cont'd)
B. COMA Model
A training evaluation model that involves the measurement of four types of variables:
- Cognitive
- Organizational environment
- Motivation
- Attitudes
MODELS OF EVALUATION (cont'd)
The COMA model improves on Kirkpatrick's model in four ways:
1. Transforms the typical reaction measure by incorporating a greater number of measures
2. Is useful for formative evaluations
3. Uses measures known to be causally related to training success
4. Defines new variables with greater precision
Note: COMA is a relatively new model; it is too early to draw conclusions about its value.
MODELS OF EVALUATION (cont'd)
C. Decision-Based Evaluation (DBE) Model
A training evaluation model that specifies the target, focus, and methods of evaluation.
MODELS OF EVALUATION (cont'd)
The DBE model goes further than either of the two preceding models:
- Identifies the target of the evaluation: trainee change, organizational payoff, or program improvement
- Identifies its focus: the variables measured
- Suggests methods suited to any evaluation goal
- Offers flexibility: the evaluation is guided by its target
MODELS OF EVALUATION (cont'd)
- As with COMA, the DBE model is recent and will need to be tested more fully
- All three models require specialized knowledge to complete the evaluation, which can limit their use in organizations without that knowledge
- Holton and colleagues' Learning Transfer System Inventory (see Chapter 10) provides a more generic approach
- See Training Today 11.2 for more on its use for evaluation
EVALUATION VARIABLES
- Training evaluation requires that data be collected on the important aspects of training
- Some of these variables have been identified in the three models of evaluation
- A more complete list of variables is presented in Table 11.1, and Table 11.2 shows sample questions and formats for measuring each type of variable
EVALUATION VARIABLES (cont'd)
- Reactions
- Learning
- Behaviour
- Motivation
- Self-efficacy
- Perceived/anticipated support
- Organizational perceptions
- Organizational results
See Table 11.2 in the text.
VARIABLES
A. Reactions
1. Affective reactions: measures that assess trainees' likes and dislikes of a training program
2. Utility reactions: measures that assess the perceived usefulness of a training program
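As a minimal sketch of how such reaction measures are often scored (the items, responses, and 5-point scale below are hypothetical, not drawn from Table 11.2), a mean can be computed per reaction type:

```python
# Hypothetical post-training reaction questionnaire scoring.
# Items use a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree).
from statistics import mean

# One trainee's responses, keyed by reaction type (illustrative items only).
responses = {
    "affective": [4, 5, 3],  # e.g., "I enjoyed the program"
    "utility":   [5, 4, 4],  # e.g., "I can apply this on the job"
}

# Scoring each type separately keeps affective and utility reactions distinct,
# since the two can diverge: a program can be enjoyable but not useful.
scores = {rtype: mean(items) for rtype, items in responses.items()}
print(scores)  # affective 4, utility ~4.33
```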
VARIABLES (cont'd)
B. Learning
Learning outcomes can be measured as:
1. Declarative learning: the acquisition of facts and information; by far the most frequently assessed learning measure
2. Procedural learning: the organization of facts and information into a smooth behavioural sequence
VARIABLES (cont'd)
C. Behaviour
Behaviours can be measured using three approaches:
1. Self-reports
2. Observations
3. Production indicators
VARIABLES (cont'd)
D. Motivation
Two types of motivation in the training context:
1. Motivation to learn
2. Motivation to apply the skill on the job (transfer)
E. Self-Efficacy
The beliefs that trainees have about their ability to perform the behaviours that were taught in a training program.
VARIABLES (cont'd)
F. Perceived and/or Anticipated Support
Two important measures of support:
1. Perceived support: the degree to which the trainee reports receiving support in attempts to transfer the learned skills
2. Anticipated support: the degree to which the trainee expects to be supported in attempts to transfer the learned skills
VARIABLES (cont'd)
G. Organizational Perceptions
Two scales designed to measure these perceptions:
1. Transfer climate: can be assessed via a questionnaire that identifies eight sets of "cues"
2. Continuous learning culture: can be assessed via the questionnaire presented in Trainer's Notebook 4.1 in Chapter 4 of the text
VARIABLES (cont'd)
G. Organizational Perceptions (cont'd)
Transfer climate cues include:
- Goal cues
- Social cues
- Task and structural cues
- Positive feedback
- Negative feedback
- Punishment
- No feedback
- Self-control
VARIABLES (cont'd)
H. Organizational Results
Results information includes:
1. Hard data: results measured objectively (e.g., number of items sold)
2. Soft data: results assessed through perceptions and judgments (e.g., attitudes)
3. Return on expectations: measurement of a training program's ability to meet managerial expectations
DESIGNS IN TRAINING EVALUATION
- The manner in which data collection is organized and how the data will be analyzed
- All data collection designs compare the trained person to something
DESIGNS IN TRAINING EVALUATION (cont'd)
1. Non-experimental designs: comparison is made to a standard, not to another group of (untrained) people
2. Experimental designs: the trained group is compared to another group that does not receive the training; assignment to groups is random
3. Quasi-experimental designs: the trained group is compared to another group that does not receive the training; assignment is not random
A numeric sketch of the control group comparison follows.
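Here is a minimal sketch, using hypothetical scores, of how the training effect might be estimated in a pre-post design with a control group (design E below). The arithmetic is the same whether assignment is random (experimental) or not (quasi-experimental):

```python
# Hypothetical pre/post scores for a pre-post design with a control group.
from statistics import mean

trained_pre    = [62, 58, 70, 65]
trained_post   = [80, 75, 88, 79]
untrained_pre  = [61, 60, 68, 64]
untrained_post = [66, 63, 71, 69]

# Gain within each group...
trained_gain   = mean(trained_post) - mean(trained_pre)      # 16.75
untrained_gain = mean(untrained_post) - mean(untrained_pre)  # 4.0

# ...and the estimated training effect: the trained group's gain over and
# above the change the untrained group showed anyway.
effect = trained_gain - untrained_gain
print(f"estimated training effect: {effect:.2f}")  # 12.75
```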
DATA COLLECTION DESIGN
A: Single-group post-only design (non-experimental): the trained group is measured after training only
B: Single-group pre-post design (non-experimental): the trained group is measured before and after training
C: Time-series design (non-experimental): repeated measures are taken before and after training
D: Single-group design with control group: the trained group is compared with an untrained group
E: Pre-post design with control group: both the trained and untrained groups are measured before and after training
F: Time-series design with control group: repeated measures are taken for both the trained and untrained groups
G: Internal referencing strategy: trainees are tested before and after training on items relevant to the training content and on irrelevant items; a larger pre-post gain on the relevant items points to a training effect
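A minimal numeric sketch of the internal referencing logic, again with hypothetical scores: if training, rather than retesting or other influences, drove the improvement, the pre-to-post gain should be concentrated on the relevant items.

```python
# Hypothetical test scores for the internal referencing strategy (design G).
# The same trainees answer items the training covered (relevant) and
# items it did not cover (irrelevant).
from statistics import mean

relevant_pre    = [55, 60, 50]
relevant_post   = [85, 88, 80]
irrelevant_pre  = [57, 59, 52]
irrelevant_post = [61, 64, 55]

gain_relevant   = mean(relevant_post) - mean(relevant_pre)      # ~29.3
gain_irrelevant = mean(irrelevant_post) - mean(irrelevant_pre)  # 4.0
print(f"relevant gain {gain_relevant:.1f} vs irrelevant gain {gain_irrelevant:.1f}")
```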
SUMMARY
- Discussed the main purposes for evaluating training programs, as well as the barriers
- Presented, critiqued, and contrasted three models of training evaluation (Kirkpatrick, COMA, and DBE)
- Recognized that Kirkpatrick's model is the most frequently used, yet has limitations
- Discussed the variables required for an evaluation, as well as the methods and techniques required to measure them
- Presented the main types of data collection designs
- Discussed the factors influencing the choice of data collection design