1 Effects of Descriptor Specificity and Observability on Incumbent Work Analysis Ratings
Erich C. Dierdorff and Frederick P. Morgeson
Presented by: Brandi Jackson, Valdosta State University

2 The Study
Purpose: examine the effects of descriptor specificity and observability on requirements commonly rated in work analysis.
Specificity vs. observability:
Specificity: level of detail; a more specific descriptor represents a more discrete unit of work.
Observability: the degree to which a descriptor reflects observable work behaviors.
Dierdorff and Morgeson asked whether the specificity and observability of work analysis descriptors affect job incumbents' ratings of those descriptors.
The authors' position: we need to clearly understand the rating process in work analysis because work analysis is key to creating sound and legally defensible HR systems.
They discuss the inferences incumbents make when judging the importance of worker and job requirements, and the differences among these requirements that could affect incumbents' judgments.

3 Current Research
Key points: inferences regarding worker/job requirements; limited understanding of rating differences; judgments subject to error; difficulty in making judgments; competency modeling.
Point 1: attention in the work analysis literature has increasingly turned to how ratings involve some form of inference by the rater about job/worker requirements.
Point 2: work analysis judgments are subject to error because they rely largely on incumbents' inferences about job/worker requirements. Incumbents have difficulty judging requirements framed as abstract, molar concepts (e.g., traits), where differences may be more noticeable. Contemporary forms of work analysis, such as competency modeling, have shifted emphasis toward person-oriented requirements, and more general requirements demand more complex inferences than narrowly focused ones.

4 Research Methods
Research design: meta-analysis.
Sample: over 47,000 incumbents spanning over 300 occupations.
Measures: five descriptors of job/worker requirements.
Procedure: five questionnaires rated on a 5-point importance scale, using O*NET data.
The questionnaires were divided into five separate descriptor questionnaires (tasks, knowledge, responsibilities, skills, and traits). All respondents were required to complete the task questionnaire; the four remaining questionnaires were randomly assigned.
The O*NET data were gathered through a staged sampling process that first targeted organizations employing the relevant incumbents; stratified random sampling was then used to identify the individual respondents who completed the questionnaires (a sketch of this sampling idea follows).
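For readers unfamiliar with the sampling terminology, here is a minimal Python sketch of stratified random sampling, the second stage described above. The respondent pool, field names, and per-stratum counts are hypothetical; O*NET's actual staged procedure is more involved than this illustration.

```python
import random
from collections import defaultdict

def stratified_sample(respondents, strata_key, per_stratum, seed=0):
    """Draw a simple stratified random sample of respondents.

    respondents: list of dicts; strata_key: field defining the stratum
    (e.g., occupation); per_stratum: how many to draw from each stratum.
    Purely illustrative of the idea, not O*NET's documented procedure.
    """
    rng = random.Random(seed)

    # Group the pool into strata by the chosen key
    strata = defaultdict(list)
    for r in respondents:
        strata[r[strata_key]].append(r)

    # Randomly sample within each stratum
    sample = []
    for members in strata.values():
        take = min(per_stratum, len(members))
        sample.extend(rng.sample(members, take))
    return sample

# Hypothetical respondent pool spanning two occupations
pool = [{"id": i, "occupation": "nurse" if i % 2 else "welder"}
        for i in range(20)]
print(stratified_sample(pool, "occupation", per_stratum=3))
```

Sampling within strata (here, occupations) ensures each occupation is represented rather than leaving representation to chance, which matters when ratings are later compared across occupations.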

5 Results
Analysis strategy: variance component (VC) analysis and meta-analysis. VC analysis measured variability in incumbent ratings; meta-analysis estimated the level of interrater reliability. Both analyses were conducted for each descriptor type.
Variance due to rater: least when rating tasks, greatest when rating traits; rater variance was also significantly and inversely related to occupational complexity for all descriptors.
Variance due to item: traits had the smallest proportion.
Differences in reliability estimates were most notable when comparing tasks to traits.
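To make the analysis strategy concrete, below is a minimal NumPy sketch (not from the paper) of how rater and item variance components, plus an interrater-reliability index such as ICC(2,1), can be estimated from a single items-by-raters matrix of importance ratings. The function, the example data, and the choice of ICC form are illustrative assumptions; the authors' actual VC analysis and meta-analytic reliability estimates operated across many occupations and descriptor types.

```python
import numpy as np

def variance_components(ratings):
    """Two-way decomposition of an items x raters ratings matrix.

    Rows are descriptor items, columns are incumbent raters. Returns
    variance component estimates for items, raters, and residual error,
    plus ICC(2,1) as a single-rater interrater-reliability index.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    item_means = ratings.mean(axis=1)
    rater_means = ratings.mean(axis=0)

    # Sums of squares for the two-way (item x rater) layout
    ss_item = k * np.sum((item_means - grand) ** 2)
    ss_rater = n * np.sum((rater_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_error = ss_total - ss_item - ss_rater

    # Mean squares
    ms_item = ss_item / (n - 1)
    ms_rater = ss_rater / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    # Random-effects variance component estimates (floored at zero)
    var_item = max((ms_item - ms_error) / k, 0.0)
    var_rater = max((ms_rater - ms_error) / n, 0.0)

    # ICC(2,1): two-way random effects, absolute agreement, single rater
    icc = (ms_item - ms_error) / (
        ms_item + (k - 1) * ms_error + k * (ms_rater - ms_error) / n
    )
    return {"item": var_item, "rater": var_rater,
            "error": ms_error, "icc_2_1": icc}

# Illustrative data: 6 descriptor items rated by 4 incumbents (5-point scale)
rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(6, 4)).astype(float)
print(variance_components(ratings))
```

In this framing, a large rater component signals idiosyncratic ratings (low agreement) while a large item component signals that respondents are discriminating among descriptors, mirroring the rater-versus-item contrast reported on this slide.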

6 Results
[Results table/figure shown on the original slide; no transcript text.]

7 Discussion
Authors' conclusion: item variance and interrater reliability are important to work analysis practice. The authors posit that the increased item variance and decreased interrater reliability found when rating molecular tasks may increase the quality of work analysis data.
Limitations:
Definitions of descriptors: descriptors in the questionnaires were accompanied by provisional examples, which could bias respondents toward answers they would not otherwise select.
Use of single-item measures: many of the single-item measures can be described as multidimensional, meaning multiple indicators would be needed to assess them, which could change the results.
Respondents from an existing database: respondents were selected from existing O*NET data, so the results are conditional on the quality of those existing ratings.
Extent of "common language": it is unclear how well a "common language" for describing job-specific requirements transfers into HR practice.
Strengths: a large sample whose characteristics closely match the total population; use of the most common descriptors to guard against further lowering interrater reliability; and rating fluctuations that coincide across the VC analysis and the meta-analysis.

8 Future Research
Shift focus to theoretical questions:
Variance due to factors outside the job
Within-descriptor and between-descriptor variance
Manipulation of descriptor wording
Other sources of work analysis data
Different types of judgments
P1: study variation in inferential judgments as a function of a range of social, cognitive, and individual factors.
P2: study variance in judgments due to factors outside the job, KSAO variation in ratings, and cross-role data that isolate job differences from non-job differences.
P3: study the impact of within- and between-descriptor differences on rating variance; within-descriptor variance has been largely ignored because between-descriptor variance is more generalizable and applicable.
P4: Morgeson and colleagues (2004) showed that simple changes in descriptor wording can significantly alter work analysis ratings; incumbents' and supervisors' ability to interpret descriptors can significantly affect idiosyncratic variance and interrater reliability.
P5: expand the research to include collaborative data from incumbents, supervisors, and job analysts.
P6: use judgments other than importance, which can offer different points of view, or "inferential lenses," on ratings and descriptor items.

9 Critical Questions
Self-presentation in questionnaires gives respondents the opportunity to answer in ways that could alter the findings. Could the observed relationship between descriptor specificity and interrater reliability be due to incumbents giving what they believe to be the more socially desirable answer?

10 Critical Questions
The O*NET data used to derive the pool of incumbent respondents relied on single-item measures for each descriptor rating. Dierdorff and Morgeson (2009) found that many of these single-item measures could be described as multidimensional constructs needing multiple indicators to be properly assessed. How would expanding O*NET's single-item measures affect the results of this study?

11 Critical Questions
Regarding future research, I believe certain precautions need to be taken to account for self-presentation and self-efficacy in respondents (both incumbents and supervisors). What ideas do you have for furthering research on work analysis data?

