Evaluating Training Programs


Evaluating Training Programs

How can training programs be evaluated? Measures used in evaluating training programs. Various ways of designing the evaluation procedures. Describe the measurement process itself.

Donald Kirkpatrick Kirkpatrick developed a model of training evaluation in 1959. It is arguably the most widely used approach: simple, flexible, and complete. It is a 4-level model.

Measures of Training Effectiveness REACTION - how well trainees like a particular training program. Evaluating in terms of reaction is the same as measuring trainees' feelings; it doesn't measure any learning that takes place. And because reaction is easy to measure, nearly all training directors do it.

Reaction (cont) It's important to measure participants' reactions in an organized fashion, using written comment sheets designed to obtain the desired reactions. The comments should also be designed so that they can be tabulated and quantified. The training coordinator or a trained observer should make his or her own appraisal of the training to supplement participants' reactions. The combination of the two evaluations is more meaningful than either one by itself.

Reaction (cont) When training directors effectively measure participants' reactions and find them favorable, they can feel proud. But they should also feel humble: the evaluation has only just begun. They may have done a masterful job of measuring reactions, but that gives no assurance that any learning has taken place, nor any indication that participants' behavior will change because of the training. And still further away is any indication of results that can be attributed to the training.

Collecting reaction measures immediately after training is important for two reasons: Memory distortion can affect measures taken at a later point. There is often a low return rate for questionnaires mailed to people long after they have completed the training.

Learning Defined in a limited way: What principles, facts, and techniques were understood and absorbed by trainees? (We're not concerned here with on-the-job use of the principles, facts, and techniques.)

Here are some guideposts for measuring learning: Measure the learning of each trainee so that quantitative results can be determined. Use a before-and-after approach so that learning can be related to the program. As much as possible, measure learning on an objective basis. Where possible, use a control group (not receiving the training) to compare with the experimental group that receives the training. Where possible, analyze the evaluation results statistically so that learning can be demonstrated in terms of correlation or level of confidence.
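The before-and-after guidepost combined with a control group amounts to comparing gain scores across the two groups. Here is a minimal sketch in Python; all scores, group sizes, and group names are hypothetical.

```python
from statistics import mean

# Hypothetical pre/post test scores (0-100) for a trained group
# and an untrained control group.
trained_pre  = [55, 60, 48, 62, 58]
trained_post = [72, 78, 65, 80, 74]
control_pre  = [54, 61, 50, 60, 57]
control_post = [56, 63, 51, 62, 58]

def mean_gain(pre, post):
    """Average per-trainee improvement from pretest to posttest."""
    return mean(b - a for a, b in zip(pre, post))

trained_gain = mean_gain(trained_pre, trained_post)
control_gain = mean_gain(control_pre, control_post)

# Learning attributable to the program is the trained group's
# gain in excess of the control group's gain.
print(f"trained gain: {trained_gain:.1f}")
print(f"control gain: {control_gain:.1f}")
print(f"excess gain:  {trained_gain - control_gain:.1f}")
```

A statistical test (e.g., a two-sample t-test on the gain scores) would then establish whether the excess gain reaches a chosen level of confidence.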

Behavior - Evaluation of training in terms of on-the-job behavior is more difficult than reaction and learning evaluations, because one must consider many factors. Here are several guideposts for evaluating training in terms of behavioral changes: Conduct a systematic appraisal of on-the-job performance on a before-and-after basis. The appraisal of performance should be made by one or more of the following groups (the more the better): trainees, trainees' supervisors, subordinates, peers, and others familiar with the trainees' on-the-job performance. Conduct a statistical analysis to compare before-and-after performance and to relate changes to the training. Conduct a post-training appraisal three months or more after training so that trainees have an opportunity to put into practice what they learned. Subsequent appraisals may add to the validity of the study.

Results The objectives of most training programs can be stated in terms of the desired results, such as reduced costs, higher quality, increased production, and lower rates of employee turnover and absenteeism. It's best to evaluate training programs directly in terms of desired results, but complicating factors can make it difficult to evaluate certain kinds of programs this way. It's recommended that training directors begin to evaluate using the criteria in the first three steps: reaction, learning, and behavior.

Utility Analysis Cost-benefit analysis: compare the costs of the training program with the benefits received (both monetary and non-monetary). Costs: direct costs, indirect costs, overhead, development costs, and participant compensation. Benefits: improvements in trainee attitudes, job performance, quality of work, and creativity.
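The cost side and the monetary benefit side of this comparison can be sketched as a simple tally; every figure and category amount below is invented for illustration.

```python
# Hypothetical cost-benefit tally for a training program (all figures invented).
costs = {
    "direct (trainer fees, materials)": 40_000,
    "indirect (admin support)":          8_000,
    "overhead (facilities)":             5_000,
    "development":                      12_000,
    "participant compensation":         35_000,
}
# Monetary benefits only; non-monetary gains (attitudes, creativity)
# still need qualitative evaluation alongside this tally.
monetary_benefits = {
    "reduced rework":      60_000,
    "lower turnover cost": 45_000,
}

total_cost    = sum(costs.values())
total_benefit = sum(monetary_benefits.values())

net_benefit = total_benefit - total_cost
roi_percent = 100 * net_benefit / total_cost

print(f"net benefit: {net_benefit}, ROI: {roi_percent:.0f}%")
```

The point of separating the categories is that each maps to a line in the slide's cost list, making it easy to see which costs dominate.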

How Should a Training Evaluation Study Be Designed? Case Study: Training >>>> Measures Taken After Training. Problem: no measures are taken prior to training, so there is no way to know whether the training brought about any change. Pretest-Posttest Design: Measures Taken Before Training >>>> Training >>>> Measures Taken After Training. Of little value on its own, as a multitude of unknown factors could be the real cause of any change in performance.

A. PRETEST-POSTTEST METHOD 1. Most commonly used method in training. 2. Does not clearly identify training as the reason for improved knowledge or performance.
B. AFTER-ONLY DESIGN WITH A CONTROL GROUP 1. A control group is used to determine whether training made a difference. 2. No pretests are given. 3. Both groups take the posttest after training. 4. The after-only design with a control group allows trainers to tell whether changes are due to their programs.

C. PRETEST-POSTTEST DESIGN WITH A CONTROL GROUP 1. Employees are randomly assigned to a treatment group or a control group. 2. Only the treatment group receives training. 3. Both groups take a pretest and a posttest. 4. Advantages: a. Pretest results ensure equality between the groups. b. Statistical analysis determines whether differences in posttest results are significant.
D. TIME-SERIES DESIGN 1. Uses a number of measures both before and after training. 2. The purpose is to establish individuals' patterns of behavior and then see whether a sudden leap in performance followed the training program. 3. Weakness: because of the relatively long time period covered, changes in behavior can be attributed to circumstances other than the program.
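The time-series design (D) can be sketched as checking for a sustained jump in repeated measures around the training date. The monthly performance scores and the training month below are hypothetical.

```python
from statistics import mean

# Hypothetical monthly performance scores; training occurred after month 6.
scores = [70, 71, 69, 72, 70, 71,   # months 1-6 (pre-training baseline)
          80, 81, 79, 82, 81, 80]   # months 7-12 (post-training)
training_month = 6

pre  = scores[:training_month]
post = scores[training_month:]

jump = mean(post) - mean(pre)
# A sudden, sustained leap right after training is the pattern a
# time-series design looks for; a gradual drift across the whole
# series would instead suggest causes other than the program.
print(f"pre mean {mean(pre):.1f}, post mean {mean(post):.1f}, jump {jump:.1f}")
```

The stability of the baseline (months 1-6) is what lets the design attribute the jump to training rather than to an ongoing trend.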

More Sophisticated Evaluation Designs Solomon Four-Group Design: ideal for ascertaining whether a training intervention had the desired effect on trainee behavior. Unlike the designs discussed so far, this design involves the use of more than one control group.

Evaluating statistically The preferred choice for analyzing a training intervention, when considering statistical power and lower costs, is Analysis of Variance (ANOVA) with an after-only control-group design. The next best approach is Analysis of Covariance (ANCOVA), using the pretest score as a covariate.
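For the after-only control-group design, the one-way ANOVA F statistic can be computed directly from the two groups' posttest scores. This is a from-scratch sketch using only the standard library; the scores are hypothetical, and a real analysis would compare F against the appropriate critical value (or use a statistics package).

```python
from statistics import mean

# Hypothetical posttest scores for an after-only control-group design.
groups = {
    "trained": [78, 82, 75, 80, 77],
    "control": [70, 68, 72, 69, 71],
}

def one_way_anova_f(samples):
    """F statistic = mean square between groups / mean square within groups."""
    all_scores = [x for g in samples for x in g]
    grand_mean = mean(all_scores)
    k = len(samples)        # number of groups
    n = len(all_scores)     # total observations
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in samples)
    ss_within  = sum((x - mean(g)) ** 2 for g in samples for x in g)
    ms_between = ss_between / (k - 1)
    ms_within  = ss_within / (n - k)
    return ms_between / ms_within

f_stat = one_way_anova_f(list(groups.values()))
# Degrees of freedom: between = k - 1 = 1, within = n - k = 8.
print(f"F(1, 8) = {f_stat:.2f}")
```

With two groups, this F test is equivalent to a two-sample t-test (F = t squared), which is why the after-only design stays cheap to analyze.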

Self-report Trainees are asked to evaluate themselves on variables related to the purpose of training. Self-report measures complicate the measurement of change because of the problems involved in the definition of change itself. The three types of change with self-report data are: Alpha change (a true shift in the variable being measured), Beta change (a recalibration of the respondent's measurement scale), and Gamma change (a redefinition of the construct itself).

Barriers that Discourage Training Evaluation Top management doesn't usually require evaluation. Most senior-level training managers don't know how to go about evaluating training programs. Senior-level training managers don't know what to evaluate. Evaluation is perceived as costly and risky.