Department of Industrial Psychology  Faculty of Economic and Management Sciences. Nadia Brits. Supervisor: Prof. Deon Meiring. ACSG Conference, 16 March 2011.

EVOLUTION OF THE CONSTRUCT-VALIDITY DEBATE
Department of Industrial Psychology, University of Stellenbosch

TODAY’S JOURNEY
SACKETT & DREHER (1982)
CONSEQUENCES OF INVALID CONSTRUCTS
DESIGN FIXES
DESIGN FIXES: TRIED & TESTED
ABANDON DIMENSIONS
DON’T TAKE AWAY MY DIMENSIONS
EPIC OF THE CV DEBATE
CRITICS OF LANCE (2008)
ACs AT A CROSSROADS

BACKGROUND
The validity of ACs is questioned, yet some argue that sufficient evidence for CV exists (Arthur et al., 2000; Thornton & Gibbons, 2008). The issue has been labelled the:
CONSTRUCT VALIDITY PUZZLE (Lievens, Chasteen, Day & Christiansen, 2006)
CONSTRUCT-RELATED VALIDITY PARADOX (Arthur, Day & Woehr, 2008)
SO-CALLED CONSTRUCT VALIDITY PROBLEM (Howard, 1997)

ORIGINS OF THE CONSTRUCT-VALIDITY DEBATE
Sackett & Dreher (1982)
Expectations: CONVERGENT VALIDITY and DISCRIMINANT VALIDITY, i.e. ratings should cluster according to DIMENSIONS
Finding: ratings clustered according to EXERCISES instead (which became known as the EXERCISE EFFECT)
Similar research followed
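The convergent/discriminant expectation can be illustrated with a toy multitrait-multimethod (MTMM) check. This is a minimal sketch with invented ratings, not data from any study cited here: construct validity would require same-dimension/different-exercise correlations to exceed different-dimension/same-exercise correlations.

```python
import numpy as np

# Hypothetical post-exercise dimension ratings for 6 candidates:
# 3 dimensions each rated in 2 exercises -> 6 rating columns.
# Column order: (dim1,ex1), (dim2,ex1), (dim3,ex1), (dim1,ex2), (dim2,ex2), (dim3,ex2)
ratings = np.array([
    [4, 3, 5, 4, 3, 5],
    [2, 2, 3, 3, 2, 2],
    [5, 4, 4, 5, 5, 4],
    [3, 3, 2, 2, 3, 3],
    [4, 5, 4, 4, 4, 5],
    [1, 2, 2, 2, 1, 2],
])
r = np.corrcoef(ratings, rowvar=False)  # correlations among the 6 rating columns

dims = [0, 1, 2, 0, 1, 2]  # dimension index per column
exs  = [0, 0, 0, 1, 1, 1]  # exercise index per column

mtmm, htmm = [], []  # monotrait-heteromethod vs heterotrait-monomethod
for i in range(6):
    for j in range(i + 1, 6):
        if dims[i] == dims[j] and exs[i] != exs[j]:
            mtmm.append(r[i, j])  # same dimension, different exercise
        elif dims[i] != dims[j] and exs[i] == exs[j]:
            htmm.append(r[i, j])  # different dimension, same exercise

# Construct validity expects mean(mtmm) > mean(htmm);
# the classic "exercise effect" finding is the reverse pattern.
print(np.mean(mtmm), np.mean(htmm))
```

Sackett and Dreher applied this logic to real AC ratings and found the heterotrait-monomethod (same-exercise) correlations dominating.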

CONSEQUENCES OF INVALID CONSTRUCTS
SELECTION of applicants based on AC performance ratings
FEEDBACK based on AC results
WASTE OF TIME AND MONEY
Practitioners provide MISLEADING services to the companies that appoint them to design and run ACs

DESIGN FIXES
DEFINITION of dimensions
NUMBER of dimensions
TRANSPARENCY of dimensions to candidates
Behavioural CHECKLISTS
Type of SCORING METHOD
ASSESSOR training
TYPE of assessor (Lievens & Klimoski, 2001; Gaugler et al., 1987)
EXPERIENCED assessors (Kolk et al., 2002; Thornton & Rupp, 2005)

DESIGN FIXES: TRIED AND TESTED
Design fixes show LITTLE, INSIGNIFICANT IMPROVEMENT
Controlling assessor variance has only a MARGINAL EFFECT on construct validity
EXERCISE EFFECTS STILL DOMINATE
What else can we do?

ABANDONING DIMENSIONS
Move from dimension-based ACs to task-based ACs
Why? Recurring exercise effects
Exercise factors show positive correlations with external performance criteria

DON’T TAKE AWAY MY DIMENSIONS
It is useless to learn a task that the participant might never encounter again
Human performance is multidimensional
Tasks cannot capture the full complexity of a real job (novel, non-repetitive tasks)
Only dimensions allow generalisation of AC results
Research supports dimensions (Connelly et al., 2008; Melchers & König, 2008; Bowler & Woehr, 2006)
Knowledge about exercise-based ACs is still lacking (Lievens, 2008)

RESULT: 3 LINES OF RESEARCH
EXERCISES and DIMENSIONS explained the same amount of variance, 34% each (Lievens & Conway, 2001)
EXERCISE EFFECTS were larger than dimension effects, 52% (Lance et al., 2004)
EXERCISES explained the most variance, 33%, but DIMENSIONS also explained a substantial amount, 22% (Bowler & Woehr, 2006)
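The percentages above come from far more elaborate CFA and generalisability studies, but the underlying idea of partitioning rating variance into dimension and exercise components can be sketched with a toy two-way sums-of-squares decomposition. The ratings below are invented and deliberately built so that exercise variance dominates, mirroring the Lance et al. pattern:

```python
import numpy as np

# Hypothetical ratings for one candidate: rows = 3 dimensions, cols = 4 exercises.
x = np.array([
    [4., 2., 5., 3.],
    [4., 3., 5., 2.],
    [3., 2., 4., 3.],
])
grand = x.mean()
ss_total = ((x - grand) ** 2).sum()
ss_dim = x.shape[1] * ((x.mean(axis=1) - grand) ** 2).sum()  # dimension main effect
ss_ex  = x.shape[0] * ((x.mean(axis=0) - grand) ** 2).sum()  # exercise main effect
ss_res = ss_total - ss_dim - ss_ex                           # interaction/residual

print(f"dimension: {ss_dim/ss_total:.0%}, exercise: {ss_ex/ss_total:.0%}")
# -> dimension: 5%, exercise: 79%
```

With these made-up numbers the candidate's level barely shifts across dimensions but swings strongly across exercises, so the exercise component swamps the dimension component, which is the empirical signature of the exercise effect.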

EPIC OF THE HEATED DEBATE
LANCE (2008): “ACs do not work the way they are supposed to”
There is NOT SUFFICIENT EVIDENCE for the 3 requirements of CV
Supports the EXERCISE-BASED MODEL
DISCARD DESIGN FIXES
Exercise effects represent cross-situational specificity in candidate performance, not method bias... TRUE VARIANCE

COMMENTS/CRITICISM TOWARDS LANCE (2008)
Ignores evidence that both dimensions and exercises account for variance in AC performance (Howard, 2008)
Design fixes should continue to be investigated (Schleicher et al., 2008; Howard, 2008; Arthur, Day & Woehr, 2008; Melchers & König, 2008)
Lance received support for candidates’ inconsistent performance being... true variance
Individuals use a set of stable skills and can adjust and adapt KSAs according to the situation
Some people perform better than others in a specific exercise

ACs AT A CROSSROADS
Persistent EXERCISE EFFECTS
Performance variability: more situation-specific (57%) than situation-consistent (43%) (Hoeft & Schuler, 2001)
Lack of consensus on solutions and future directions
Walter Mischel (1968): consistency in behaviour appears ONLY when situational factors are acknowledged and taken into account
TRAIT-ACTIVATION THEORY (TAT)

TAT uses the person-situation interaction to explain behaviour on the basis of responses to trait-relevant cues found in situations
SITUATION STRENGTH: strong vs weak situations
SITUATION RELEVANCE: a situation is considered relevant to a trait if it provides cues for the expression of trait-relevant behaviour

APPROACHES TO IMPLEMENT TAT
Adapting the CONTENT of the exercise
Adjusting the INSTRUCTIONS of each exercise to guide participants on what type of behaviour to show
Training ROLE-PLAYERS on how to interact in order to elicit certain behaviours from participants
Using a LARGE NUMBER OF SHORTER EXERCISES to obtain samples of performance on a number of independent tasks

Haaland and Christiansen (2002) found stronger convergence of AC ratings
A GAP IN RESEARCH about the effectiveness of TAT remains

TO SUMMARISE
SACKETT & DREHER (1982)
CONSEQUENCES OF INVALID CONSTRUCTS
DESIGN FIXES
DESIGN FIXES: TRIED & TESTED
ABANDON DIMENSIONS
DON’T TAKE AWAY MY DIMENSIONS
EPIC OF THE CV DEBATE
CRITICS OF LANCE (2008)
ACs AT A CROSSROADS