
1 Combining Test Data
MANA 4328, Dr. Jeanne Michalski (michalski@uta.edu)

2 Selection Decisions
First, how to deal with multiple predictors?
Second, how to make a final decision?

3 Developing a Hiring System
OK, Enough Assessing: Who Do We Hire??!!

4 Interpreting Test Scores
Norm-referenced scores
- Test scores are compared with those of other applicants or a comparison group
- Raw scores should be converted to z scores or percentiles
- Use "rank ordering"
Criterion-referenced scores
- Test scores indicate a degree of competency and are NOT compared to other applicants
- Typically scored as "qualified" vs. "not qualified"
- Use "cut-off scores"
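For the norm-referenced case, a minimal Python sketch of the raw-score-to-z-score and percentile conversion, using an invented applicant pool:

```python
import numpy as np

# Invented raw test scores for an applicant pool
raw = np.array([72, 85, 90, 61, 78, 95, 88, 70])

# Z score: distance from the pool mean in standard deviations
z = (raw - raw.mean()) / raw.std()

# Percentile rank: share of the pool scoring at or below each applicant
pct = np.array([(raw <= s).mean() * 100 for s in raw])

# Rank ordering follows directly from either column
for score, zs, p in zip(raw, z, pct):
    print(f"raw={score}  z={zs:+.2f}  percentile={p:.0f}")
```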

5 Setting Cutoff Scores
Based on the percentage of applicants you need to hire (yield ratio): "Thorndike's predicted yield"
- You need 5 warehouse clerks and expect 50 to apply
- 5 / 50 = .10 (10%), so 90% of applicants are rejected
- The cutoff score is set at the 90th percentile (for comparison, a z score of 1 is only the 84th percentile)
Based on a minimum proficiency score
- Derived from a validation study linked to job analysis
- Incorporates SEM (validity and reliability)
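A sketch of the predicted-yield arithmetic above with scipy's normal quantile function; the applicant numbers are the slide's, and treating test scores as normally distributed is an assumption:

```python
from scipy.stats import norm

openings, expected_applicants = 5, 50
yield_ratio = openings / expected_applicants   # 0.10: hire the top 10%
cutoff_percentile = 1 - yield_ratio            # reject the bottom 90%

# z score marking the 90th percentile of a normal score distribution
z_cut = norm.ppf(cutoff_percentile)            # about 1.28

print(f"yield ratio = {yield_ratio:.0%}, cutoff percentile = {cutoff_percentile:.0%}, z = {z_cut:.2f}")
print(f"for comparison, z = 1 sits at the {norm.cdf(1):.0%} percentile")
```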

6 Selection Outcomes
[Scatterplot: PREDICTION on the x-axis vs. PERFORMANCE on the y-axis, with a regression line; a cut score at the 90th percentile splits applicants into No Pass and Pass]

7 Selection Outcomes
[Quadrant chart: PREDICTION (No Hire / Hire) against PERFORMANCE (Low / High Performer). Hired high performers are True Positives; rejected high performers are False Negatives (Type 1 error); hired low performers are False Positives (Type 2 error); rejected low performers are True Negatives]

8 Selection Outcomes
[Scatterplot with the prediction line and cut score splitting applicants into Unqualified and Qualified on the PREDICTION axis, and into Low and High Performers on the PERFORMANCE axis]

9 Dealing With Multiple Predictors
"Mechanical" techniques are superior to judgment.
- Combine predictors: compensatory or "test assessment approach"
- Judge each independently: multiple hurdles / multiple cutoffs
- Profile matching
- Hybrid selection systems

10 Compensatory Methods
Unit weighting: Score = P1 + P2 + P3 + P4
Rational weighting: Score = (.10)P1 + (.30)P2 + (.40)P3 + (.20)P4
Ranking: Score = Rank(P1) + Rank(P2) + Rank(P3) + Rank(P4)
Profile matching: D² = Σ (P_ideal − P_applicant)²
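A sketch of the first three composites for an invented pool of three applicants scored on four predictors; the rational weights are the slide's, everything else is made up:

```python
import numpy as np

# Rows: applicants; columns: predictors P1..P4 (invented scores)
scores = np.array([[55, 70, 80, 60],
                   [75, 60, 65, 85],
                   [65, 90, 70, 50]])

# Unit weighting: every predictor counts equally
unit = scores.sum(axis=1)

# Rational weighting: the slide's judgment-based weights
w = np.array([0.10, 0.30, 0.40, 0.20])
rational = scores @ w

# Ranking: rank applicants on each predictor (1 = best), then sum;
# unlike the other composites, a LOWER total is better here
ranks = (-scores).argsort(axis=0).argsort(axis=0) + 1
rank_sum = ranks.sum(axis=1)

print(unit, rational, rank_sum)
```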

11 Multiple Regression Approach
Predicted job performance = a + b1x1 + b2x2 + b3x3, where x = predictors and b = optimal weights.
Issues:
- Compensatory: assumes high scores on one predictor compensate for low scores on another
- Assumes a linear relationship between predictor scores and job performance (i.e., "more is better")
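A sketch of estimating a and the b weights by ordinary least squares on invented validation data (past hires' predictor scores x1..x3 and measured performance y), then scoring a new applicant:

```python
import numpy as np

# Invented validation data: five past hires
X = np.array([[70, 55, 80],
              [60, 75, 65],
              [85, 60, 70],
              [75, 80, 90],
              [50, 65, 60]], dtype=float)
y = np.array([3.2, 3.0, 3.6, 4.4, 2.5])   # job performance ratings

# Fit perf = a + b1*x1 + b2*x2 + b3*x3
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef[0], coef[1:]

applicant = np.array([72, 68, 75])
print("predicted performance:", a + applicant @ b)
```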

12 Multiple Cutoff Approach
Sets minimum scores on each predictor.
Issues:
- Assumes a non-linear relationship between predictors and job performance
- Assumes predictors are non-compensatory
- How do you set the cutoff scores?
- If an applicant fails the first cutoff, why continue?
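A sketch of the non-compensatory screen this slide describes: an applicant qualifies only if every predictor clears its minimum; the predictors and cutoff values are invented:

```python
cutoffs = {"ability": 60, "work_sample": 70, "interview": 3}

def qualifies(applicant: dict) -> bool:
    # One score below its cutoff rejects, no matter how high the others are
    return all(applicant[k] >= c for k, c in cutoffs.items())

print(qualifies({"ability": 75, "work_sample": 72, "interview": 4}))  # True
print(qualifies({"ability": 95, "work_sample": 65, "interview": 5}))  # False: work sample misses its cutoff
```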

14 Multiple Hurdle Model
[Flow diagram: Background → Test 1 → Test 2 → Interview → Finalist Decision. A Pass at each stage advances the applicant to the next stage; a Fail at any stage leads to Reject]
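A sketch of the hurdle sequence in the diagram: stages are evaluated in order and processing stops at the first failure, which is the practical answer to "why continue?" on the previous slide; the stage tests are invented:

```python
# Ordered hurdles: (stage name, pass test)
hurdles = [
    ("background", lambda a: a["background_clear"]),
    ("test 1",     lambda a: a["test1"] >= 60),
    ("test 2",     lambda a: a["test2"] >= 65),
    ("interview",  lambda a: a["interview"] >= 3),
]

def process(applicant: dict) -> str:
    for stage, passed in hurdles:
        if not passed(applicant):
            return f"reject at {stage}"   # later stages never run
    return "finalist"

print(process({"background_clear": True, "test1": 70, "test2": 50, "interview": 4}))
# -> reject at test 2
```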

15 Profile Matching Approach
Emphasizes the "ideal" level of a KSA: e.g., too little attention to detail may produce sloppy work, while too much may represent compulsiveness.
Issues:
- Non-compensatory
- Small errors in the profile can add up to a big mistake in the overall score
- Little evidence that it works better
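A sketch of the distance score D² = Σ (P_ideal − P_applicant)² with invented profiles; note that deviation in either direction is penalized, which is exactly the "ideal level" idea above:

```python
ideal     = [70, 80, 60, 75]   # ideal KSA profile, not maximum scores
applicant = [65, 85, 90, 70]

# Squaring penalizes scoring far above the ideal as much as far below it
d2 = sum((i - a) ** 2 for i, a in zip(ideal, applicant))
print(d2)  # 975; smaller D2 = closer match to the ideal profile
```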


18 Making Finalist Decisions
Top-down strategy: maximizes efficiency, but may need to look at adverse impact issues.
Banding strategy: creates "bands" of scores that are statistically equivalent (based on reliability), then hires from within bands either randomly or based on other factors (including diversity).

19 Banding
Groups like test scores together.
Standard Error of Measure (SEM): a function of test reliability.
A band of ± 2 SEM gives a 95% confidence interval.
If the top score on a test is 95 and the SEM is 2, then scores between 95 and 91 should be banded together.
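A sketch of the banding computation; the SEM formula (test SD times the square root of one minus reliability) is the standard one, and the SD and reliability values are invented to reproduce the slide's SEM of 2:

```python
import math

def sem(sd: float, reliability: float) -> float:
    # Standard error of measurement: SD * sqrt(1 - reliability)
    return sd * math.sqrt(1 - reliability)

top_score = 95
band_floor = top_score - 2 * sem(sd=5.0, reliability=0.84)   # 95 - 2*2 = 91

scores = [95, 94, 92, 91, 89, 88]
band = [s for s in scores if s >= band_floor]
print(band)  # [95, 94, 92, 91]: statistically equivalent to the top score
```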

20 Combined Selection Model
Selection Stage         | Selection Test                  | Decision Model
Applicants → Candidates | Application Blank               | Minimum Qualification Hurdle
Candidates → Finalists  | Four Ability Tests, Work Sample | Rational Weighting
Finalists → Offers      | Structured Interview            | Unit Weighting, Rank Order
Offers → Hires          | Drug Screen, Final Interview    |

21 Alternative Approach
Rate each attribute on each tool: Desirable / Acceptable / Unacceptable.
Develop a composite rating for each attribute:
- Combining scores from multiple assessors
- Combining scores across different tools
- A "judgmental synthesis" of the data
Use composite ratings to make final decisions.

22 Categorical Decision Approach
- Eliminate applicants with unacceptable qualifications
- Then hire candidates with as many desirable ratings as possible
- Finally, hire as needed from applicants with "acceptable" ratings
- Optional: "weight" attributes by importance
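A sketch of the categorical rule above: drop anyone with an unacceptable rating, then order the rest by desirable count, breaking ties on acceptable count; the applicants and ratings are invented:

```python
applicants = {
    "A": ["desirable", "acceptable", "desirable"],
    "B": ["acceptable", "unacceptable", "desirable"],
    "C": ["acceptable", "acceptable", "desirable"],
}

# Step 1: eliminate anyone rated unacceptable on any attribute
eligible = {k: v for k, v in applicants.items() if "unacceptable" not in v}

# Steps 2-3: hire in order of desirable ratings, then acceptable ratings
order = sorted(eligible,
               key=lambda k: (eligible[k].count("desirable"),
                              eligible[k].count("acceptable")),
               reverse=True)
print(order)  # ['A', 'C']; B was eliminated in step 1
```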

23 Sample Decision Table

24 More Positions than Applicants

25 More Applicants than Positions

