Combining Test Data
MANA 4328
Dr. George Benson, benson@uta.edu
Selection Decisions
First, how do we deal with multiple predictors?
Second, how do we make a final decision?
Interpreting Test Scores
Norm-referenced scores
- Test scores are compared to other applicants or a comparison group
- Raw scores should be converted to z scores or percentiles (a short sketch follows below)
- Use "rank ordering"
Criterion-referenced scores
- Test scores indicate a degree of competency, NOT compared to other applicants
- Typically scored as "qualified" vs. "not qualified"
- Use "cut-off scores"
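As a minimal sketch of the norm-referenced conversion, the snippet below turns raw scores into z scores and within-pool percentile ranks; the score list is invented for illustration.

```python
# Norm-referenced scoring sketch: convert raw test scores to z scores
# and percentile ranks within the applicant pool. Scores are hypothetical.
from statistics import mean, stdev

applicant_scores = [62, 75, 81, 90, 68, 77]

m, s = mean(applicant_scores), stdev(applicant_scores)
for raw in sorted(applicant_scores, reverse=True):
    z = (raw - m) / s
    # Percentile rank: share of the pool scoring at or below this applicant
    pct = 100 * sum(x <= raw for x in applicant_scores) / len(applicant_scores)
    print(f"raw={raw}  z={z:+.2f}  percentile={pct:.0f}")
```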
Setting Cutoff Scores
Based on the percentage of applicants you need to hire (yield ratio): "Thorndike's predicted yield"
- You need 5 warehouse clerks and expect 50 applicants
- 5 / 50 = .10 (10%), so 90% of applicants will be rejected
- The cutoff score is therefore set at the 90th percentile (for reference, a z score of 1 corresponds to the 84th percentile); see the sketch after this list
Based on a minimum proficiency score
- Derived from a validation study linked to job analysis
- Incorporates SEM (validity and reliability)
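A minimal sketch of the Thorndike calculation, using the slide's warehouse-clerk numbers; the z-score step assumes test scores are roughly normally distributed.

```python
# Thorndike's predicted yield: set the cutoff percentile from the
# selection ratio, then find the matching z score (assumes roughly
# normally distributed test scores).
from statistics import NormalDist

openings, applicants = 5, 50
selection_ratio = openings / applicants      # 5 / 50 = 0.10
cutoff_percentile = 1 - selection_ratio      # reject the bottom 90%

z_cut = NormalDist().inv_cdf(cutoff_percentile)
print(f"selection ratio = {selection_ratio:.0%}")
print(f"cutoff at the {cutoff_percentile:.0%} percentile -> z = {z_cut:.2f}")
```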
Selection Outcomes
[Figure: scatterplot of prediction scores against performance with a regression line; a cut score at the 90th percentile splits applicants into "no pass" and "pass" groups]
Selection Outcomes

                   No Hire                        Hire
High performer     Type 1 error: false negative   True positive
Low performer      True negative                  Type 2 error: false positive

(Rows: actual PERFORMANCE; columns: PREDICTION / hiring decision.)
Selection Outcomes
[Figure: prediction line with a cut score separating "unqualified" from "qualified" applicants; high and low performers appear on both sides of the cut]
Banding
Grouping like test scores together
- The standard error of measurement (SEM) is a function of test reliability
- A band of + or - 2 SEM corresponds to a 95% confidence interval
- Example: if the top score on a test is 95 and the SEM is 2, then scores between 95 and 91 should be banded together (see the sketch below)
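A minimal sketch of the SEM arithmetic; the test standard deviation and reliability below are assumed values, chosen so the SEM works out to 2 and matches the slide's example.

```python
# SEM-based banding sketch. SD and reliability are assumed values;
# the top score of 95 matches the slide's example.
import math

sd, reliability = 10.0, 0.96
sem = sd * math.sqrt(1 - reliability)   # standard error of measurement = 2.0

top_score = 95
band_floor = top_score - 2 * sem        # 2 SEM below the top score
print(f"SEM = {sem:.1f}; band scores from {band_floor:.0f} to {top_score} together")
```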
Selection Outcomes
[Figure: prediction line with a cut score; acceptable vs. unacceptable performance plotted against unqualified vs. qualified predictions]
Dealing With Multiple Predictors
- "Mechanical" combination techniques are superior to judgment
- Combine predictors: compensatory or "test assessment" approach
- Judge each predictor independently: multiple hurdles / multiple cutoffs
- Hybrid selection systems
Compensatory Methods
Unit weighting:
  P1 + P2 + P3 + P4 = Score
Rational weighting:
  (.10)P1 + (.30)P2 + (.40)P3 + (.20)P4 = Score
Ranking:
  Rank(P1) + Rank(P2) + Rank(P3) + Rank(P4) = Score
Profile matching:
  D² = Σ (P_ideal - P_applicant)²
(A sketch of these scoring rules follows below.)
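A minimal sketch of the four scoring rules; the predictor scores, two-person pool, and ideal profile are invented, while the rational weights come from the slide.

```python
# Compensatory scoring sketch: unit weighting, rational weighting,
# ranking, and profile matching. All scores are hypothetical.
applicant = [80, 70, 90, 60]           # scores on predictors P1..P4
weights = [0.10, 0.30, 0.40, 0.20]     # rational weights from the slide
ideal = [85, 75, 88, 70]               # hypothetical ideal profile

unit = sum(applicant)                                      # unit weighting
rational = sum(w * p for w, p in zip(weights, applicant))  # rational weighting
d2 = sum((i - p) ** 2 for i, p in zip(ideal, applicant))   # D^2; lower = closer match

# Ranking needs a pool: sum each applicant's rank (1 = best) per predictor
pool = {"A": [80, 70, 90, 60], "B": [75, 85, 70, 90]}
rank_scores = {name: 0 for name in pool}
for k in range(4):
    ordered = sorted(pool, key=lambda name: pool[name][k], reverse=True)
    for rank, name in enumerate(ordered, start=1):
        rank_scores[name] += rank

print(unit, rational, d2, rank_scores)
```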
Combined Selection Model

Selection Stage           Selection Test                    Decision Model
Applicants -> Candidates  Application blank                 Minimum qualification hurdle
Candidates -> Finalists   Four ability tests + work sample  Rational weighting
Finalists -> Offers       Structured interview              Unit weighting, rank order
Offers -> Hires           Drug screen, final interview
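A minimal sketch of this staged flow under assumed data: the applicant records, the stage-2 weights and cutoff, and the decision to rank finalists on the structured interview are all hypothetical, and the drug-screen stage is omitted.

```python
# Staged selection sketch: a hurdle, then a compensatory composite,
# then a rank-ordered offer list. All records and weights are hypothetical.
applicants = {
    "A": {"min_qual": True,  "ability": 82, "work_sample": 75, "interview": 88},
    "B": {"min_qual": True,  "ability": 91, "work_sample": 80, "interview": 70},
    "C": {"min_qual": False, "ability": 95, "work_sample": 90, "interview": 95},
}

# Stage 1 (hurdle): application blank screens on minimum qualifications
candidates = {k: v for k, v in applicants.items() if v["min_qual"]}

# Stage 2 (compensatory): rationally weighted ability + work sample,
# keep everyone above an assumed composite cutoff
finalists = {k: v for k, v in candidates.items()
             if 0.6 * v["ability"] + 0.4 * v["work_sample"] >= 75}

# Stage 3: rank-order finalists on the structured interview, offer top-down
offers = sorted(finalists, key=lambda k: finalists[k]["interview"], reverse=True)
print(offers)   # ['A', 'B'] -- C failed the stage-1 hurdle
```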
Final Selection
Top-down selection (rank order) vs. cutoff scores. Two questions guide the choice:
- Is the predictor linearly related to performance?
- How reliable are the tests?
Top-down method: rank order
Minimum cutoffs: passing scores
(A sketch contrasting the two rules follows below.)
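A minimal sketch contrasting the two decision rules on the same hypothetical score list; the names, scores, cutoff, and number of openings are invented.

```python
# Top-down ranking vs. cutoff scores on the same hypothetical pool.
scores = {"A": 88, "B": 94, "C": 71, "D": 83}

# Top-down: rank order and take the best n
n = 2
top_down = sorted(scores, key=scores.get, reverse=True)[:n]

# Cutoff: everyone at or above the passing score is qualified
cutoff = 80
qualified = [name for name, s in scores.items() if s >= cutoff]

print(top_down)   # ['B', 'A']
print(qualified)  # ['A', 'B', 'D']
```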
Final Decision
- Random selection
- Ranking
- Grouping
- Role of discretion or "gut feeling"