
Effects of Descriptor Specificity and Observability on Incumbent Work Analysis Ratings
Erich C. Dierdorff and Frederick P. Morgeson
Presented by: Brandi Jackson, Valdosta State University

The Study
Purpose: to examine the effects of descriptor specificity and observability on the requirements commonly rated in work analysis.
Specificity vs. observability
Specificity: level of detail; a highly specific descriptor represents a discrete unit of work.
Observability: the extent to which a descriptor reflects directly observable work behaviors.
Dierdorff and Morgeson were interested in job incumbents' ratings of work analysis descriptors and whether the specificity and observability of those descriptors affected the ratings.
The authors' position is that we need to clearly understand the rating process in work analysis because work analysis is key to creating sound and legally defensible HR systems.
They discuss the inferences incumbents make when judging the importance of worker and job requirements, and the differences between these requirements that could affect incumbents' judgments.

Current Research
Inferences regarding worker/job requirements
Limited understanding of rating differences
Subject to error
Difficulty in making judgments
Competency modeling
Point 1: Attention in the work analysis literature has increasingly turned to how ratings involve some form of inference by the rater about job/worker requirements.
Point 2: Work analysis judgments are subject to error because they rely largely on incumbents' inferences about job/worker requirements. Incumbents have difficulty making judgments on requirements framed as abstract, molar concepts (e.g., traits), so rating differences may be more noticeable for them. Contemporary forms of work analysis, such as competency modeling, have shifted emphasis toward person-oriented requirements, and more general requirements involve more complex inferences than narrowly focused ones.

Research Methods
Research design: meta-analysis of archival O*NET incumbent ratings
Sample: over 47,000 incumbents spanning more than 300 occupations
Measures: five descriptor types of job/worker requirements
Procedure: five questionnaires, each rated on a 5-point importance scale, drawn from O*NET data.
The questionnaires were divided into five separate descriptor questionnaires (tasks, knowledge, responsibilities, skills, and traits). All respondents were required to complete the task questionnaire; the four remaining questionnaires were randomly assigned (a small sketch of this assignment scheme follows below).
The O*NET data were gathered with a staged sampling process targeting organizations that employ relevant incumbents; stratified random sampling was then used to identify the individual respondents who completed the questionnaires.
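A minimal sketch, in Python, of the questionnaire-assignment scheme described above. This is an illustration only: the descriptor names come from the slide, while the function and variable names are hypothetical and not from the article.

```python
import random

# Descriptor questionnaires described on the slide: every respondent
# completes the task questionnaire; one of the remaining four is
# randomly assigned to each respondent.
CORE = "tasks"
OPTIONAL = ["knowledge", "responsibilities", "skills", "traits"]

def assign_questionnaires(respondent_ids, seed=42):
    """Map each respondent ID to the pair of questionnaires they complete."""
    rng = random.Random(seed)
    return {rid: (CORE, rng.choice(OPTIONAL)) for rid in respondent_ids}

if __name__ == "__main__":
    for rid, forms in assign_questionnaires(range(5)).items():
        print(rid, forms)
```

In the actual O*NET program, assignment would occur within the staged, stratified sample of incumbents rather than over an arbitrary list of IDs; the sketch only shows the random-assignment logic.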

Results
Analysis strategy: variance component (VC) analysis and meta-analysis
Variance due to rater: least when rating tasks, most when rating traits
Variance due to item: traits had the smallest proportion
Analysis strategy: the VC analysis measured variability in incumbent ratings, and the meta-analysis estimated the level of interrater reliability; both analyses were conducted for each descriptor type (see the sketch below).
Variance due to rater was significantly and inversely related to occupational complexity for all descriptors.
Differences in reliability were most notable when comparing tasks to traits.
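A minimal sketch of the general idea behind this analysis strategy (an assumption about the technique, not the authors' exact procedure): partition a raters-by-items matrix of importance ratings into item, rater, and residual variance components, and compute an intraclass correlation, ICC(2,1), as one common interrater-reliability estimate. All function and variable names are hypothetical.

```python
import numpy as np

def variance_components(ratings):
    """Partition a raters-by-items matrix of importance ratings into item,
    rater, and residual variance components (two-way ANOVA, one observation
    per cell) and return ICC(2,1) as an interrater-reliability estimate."""
    ratings = np.asarray(ratings, dtype=float)
    n_raters, n_items = ratings.shape
    grand = ratings.mean()
    rater_means = ratings.mean(axis=1)   # one mean per incumbent (rater)
    item_means = ratings.mean(axis=0)    # one mean per descriptor item

    # Mean squares from the two-way decomposition
    ms_rater = n_items * np.sum((rater_means - grand) ** 2) / (n_raters - 1)
    ms_item = n_raters * np.sum((item_means - grand) ** 2) / (n_items - 1)
    resid = ratings - rater_means[:, None] - item_means[None, :] + grand
    ms_error = np.sum(resid ** 2) / ((n_raters - 1) * (n_items - 1))

    # Variance components (negative estimates truncated at zero)
    vc = {
        "item": max((ms_item - ms_error) / n_raters, 0.0),
        "rater": max((ms_rater - ms_error) / n_items, 0.0),
        "residual": ms_error,
    }
    # ICC(2,1): agreement of a single rater, raters treated as random
    icc = (ms_item - ms_error) / (
        ms_item + (n_raters - 1) * ms_error
        + n_raters * (ms_rater - ms_error) / n_items
    )
    return vc, icc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated 1-5 importance ratings: 20 incumbents x 10 descriptor items
    fake = np.clip(np.round(rng.normal(3.5, 1.0, size=(20, 10))), 1, 5)
    components, icc = variance_components(fake)
    print(components, round(icc, 3))
```

In the study itself, reliability estimates of this kind would be computed for each occupation and descriptor type and then aggregated meta-analytically; the sketch shows only the within-matrix decomposition.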

Discussion
Authors' conclusion: item variance and interrater reliability are important to work analysis practice.
Limitations:
Definitions of descriptors
Use of single-item measures
Respondents from existing database
Extent of "common language"
Point 1: The authors posit that item variance and interrater reliability are important considerations for work analysis practice; because ratings of more molecular descriptors (tasks) showed less idiosyncratic variance and higher interrater reliability, relying on such descriptors may increase the quality of work analysis data.
Point 2 (limitations): Definitions – descriptors in the questionnaires were accompanied by provisional examples, which could bias respondents toward answers they would not otherwise select. The single-item measures used in the questionnaires can be described as multidimensional, meaning multiple indicators would be needed and could change the results. Respondents were drawn from existing O*NET data, so the results are conditional on the quality of the existing ratings. It is also unclear how far a "common language" for describing job-specific requirements transfers into HR practice.
Strengths: a large sample whose characteristics are close to the total population; use of the most common descriptors, guarding against further lowering interrater reliability; and rating fluctuations that coincide across the VC analysis and the meta-analysis.

Future Research
Shift focus to theory
Variance due to factors outside the job
Within-descriptor and between-descriptor variance
Manipulation of descriptor wording
Other sources of work analysis data
Different judgments
P1: Examine variation in inferential judgments as a function of a range of social, cognitive, and individual factors.
P2: Examine variance in judgments due to factors outside of the job, such as KSAO-related variation in ratings, using cross-role data to isolate job differences from non-job differences.
P3: Examine the impact of within- and between-descriptor differences on rating variance; within-descriptor variance has been largely ignored because between-descriptor variance is more generalizable and applicable.
P4: Morgeson and colleagues (2004) showed that simple changes in descriptor wording can significantly alter work analysis ratings; incumbents' and supervisors' ability to interpret the descriptors can significantly affect idiosyncratic variance and interrater reliability.
P5: Expand the research to include collaborative data from incumbents, supervisors, and job analysts.
P6: Using different judgments (other than judgments of importance) can offer different points of view, or "inferential lenses," on ratings and descriptor items.

Critical Questions
Self-presentation in questionnaires gives respondents the opportunity to answer in a manner that could alter the findings. Could the finding that high-specificity items are higher in interrater reliability while low-specificity items are lower in interrater reliability be due to incumbent responses reflecting what the incumbents believe to be the more socially desirable answer?

Critical Questions
The O*NET data used to derive the pool of incumbent respondents relied on single-item measures for each descriptor rating. Dierdorff and Morgeson (2009) note that many of these single-item measures could be described as multidimensional constructs needing multiple indicators to be properly assessed. How would expanding O*NET's single-item measures affect the results of this study?

Critical Questions
With regard to future research, I believe certain precautions need to be taken to account for self-presentation and self-efficacy in respondents (incumbents and supervisors). What ideas do you have for furthering research on work analysis data?