Reliability and Validity: Selection Methods

Reliability and Validity: Selection Methods
Dr Joan Harvey

Reliability and Validity
- Reliability is consistency of measurement (a sketch of one common index follows below):
  - across different interviewers
  - across different items in tests
  - across different raters in LGDs (leaderless group discussions)
- Validity is about what is being measured:
  - construct
  - predictive
  - concurrent
  - face
  - content
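
One widely used index of inter-item consistency is Cronbach's alpha. The following is a minimal illustrative sketch, not part of the original slides; the candidate scores are invented.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of test items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented example: 6 candidates answering a 4-item scale (1-5 ratings)
scores = np.array([[4, 5, 4, 4],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 3, 4],
                   [1, 2, 1, 2],
                   [4, 4, 5, 4]])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values near 1 indicate high consistency
```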

Two main issues in assessment
- Criterion related: concerns such as the 'scores' to be obtained on the criterion measures, i.e. the assessment tools
- Performance related: a much bigger problem these days, as we are not very good at isolating the performance attributable to the job incumbent

Further issues
- Selection ratio
- Subjectivity in assessments
- Weighting:
  - equal weights
  - compensatory
  - minimum levels for each measure
  - based on predictive validities
  - based on past reliabilities
  - expert judgement
- Can we select a person without interviewing them? Do we want to?
- Training of assessors

Lots of assessment methods, so criteria for choosing?
- Reliability
- Validity
- Legality
- Generality
- Cost
- Practicality
- Professional image
- Candidate reactions

Psychometrics
- Cronbach (1984): "a standardized sample of behaviour which can be described by a numerical scale or category system"
- A quantitative assessment of some psychological attribute (a scoring sketch follows below)
- Measures of MAXIMAL performance: achievement, ability
- Measures of TYPICAL performance: personality-associated constructs, values, attitudes, interests
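
Because psychometric tests are standardized, raw scores are normally re-expressed against a norm group, for example as z-scores or stens. A minimal illustrative sketch, not from the slides; the raw scores are invented, and in practice published norm tables are used rather than the applicant sample itself.

```python
import numpy as np

def to_sten(raw_scores: np.ndarray) -> np.ndarray:
    """Convert raw test scores to the 1-10 sten scale via z-scores."""
    z = (raw_scores - raw_scores.mean()) / raw_scores.std(ddof=1)
    sten = np.clip(np.round(z * 2 + 5.5), 1, 10)  # standard sten transform
    return sten.astype(int)

raw = np.array([12, 18, 25, 31, 36, 40, 44, 49])  # invented norm-group scores
print(to_sten(raw))
```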

Issues in validation of assessment tools
- Rarely conducted adequately
- Raters often poorly trained or skilled
- Problems of prediction:
  - the criterion problem (error & bias)
  - restriction of range (a correction sketch follows below)
  - small sample sizes
- The tools are used both in R&S and in appraisal
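
Restriction of range arises because validity is usually estimated only on those hired, whose test scores vary less than the full applicant pool's, which depresses the observed correlation. One standard repair is Thorndike's Case II correction for direct range restriction; the sketch below uses invented numbers.

```python
import math

def correct_range_restriction(r_obs: float, sd_restricted: float,
                              sd_unrestricted: float) -> float:
    """Thorndike Case II correction for direct range restriction."""
    u = sd_unrestricted / sd_restricted  # ratio > 1 when range is restricted
    return (r_obs * u) / math.sqrt(1 - r_obs**2 + (r_obs * u)**2)

# Invented figures: validity of 0.25 observed among hires whose test-score
# SD (8) is narrower than the applicant pool's (12)
print(f"corrected r = {correct_range_restriction(0.25, 8, 12):.2f}")  # ~0.36
```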

Summary
- Validity of methods doesn't necessarily match their popularity (e.g. interviews, personality tests)
- Organisations often fail to conduct adequate job analysis & validation studies, and then there can be problems in legal cases
- Meta-analysis results can be informative, but should be interpreted with caution
- Selection is a two-way process
- Globalisation presents new challenges for selection and diversity
- In reality there is often a disparity between theory & practice

Business utility of selection
Calculating the cost-benefit of using an ability test (worked example below):

Saving per employee per year = (r x SDy x Z) - (C / P)

where:
- r = validity of the selection test
- SDy = standard deviation of employee productivity in £
- Z = calibre of recruits (expressed as a standard score on the test)
- C = cost per candidate
- P = proportion of candidates selected
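
A minimal sketch of the calculation; the function name and all figures are invented for illustration.

```python
def utility_per_hire(r: float, sd_y: float, z: float,
                     cost_per_candidate: float, prop_selected: float) -> float:
    """Saving per employee per year: (r * SDy * Z) - (C / P)."""
    return r * sd_y * z - cost_per_candidate / prop_selected

# Invented figures: validity 0.4, productivity SD of 10,000 GBP, recruits
# averaging one SD above the mean on the test, 50 GBP per candidate,
# 1 in 10 candidates selected
saving = utility_per_hire(0.4, 10_000, 1.0, 50, 0.1)
print(f"{saving:,.0f} GBP per hire per year")  # -> 3,500 GBP
```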

Equal opportunities issues
- Direct versus indirect discrimination
- Most cases are on gender: R&S, promotion; the 'new' issue here relates to pay
- Next most common are cases on ethnic origin
- All require that information about the processes is retained; this can include notes and ratings from interviews
- Cases can go all the way up to the European Court
- There is still a glass ceiling; some refer to being female and black as the 'concrete' ceiling
- A role for 'positive action'?

Main R & S methods
- Interviews (used by >98%): 1-to-1, 2-to-1, or a panel of several to 1; dealt with later
- Tests: intellect, systems, aptitudes, personality, interests, management style, stress responses; dealt with later
- Exercises:
  - group activities such as LGDs
  - scenario analyses
  - in-tray exercises
  - presentations
  - problem-solving tasks
  - case study analyses
- References
- Application form or CV
- May all be combined into an Assessment Centre

Comparing validities of different selection methods

Assessment Centres
- An assessment process, not a place
- Multiple assessments of a candidate
- Multiple methods, including:
  - competency interview
  - group exercise
  - psychometrics
- Multiple (trained) assessors
- A systematic process for recording & rating

ACs
- Performance is measured against a pre-determined set of competencies & job-related criteria
- Assessors should be rigorously trained to observe candidates' performance
- Overall performance is evaluated by combining all reports, i.e. the overall decision is not left to one (possibly) biased assessor (a sketch of one simple combination rule follows below)
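
A minimal sketch of the combination step. The ratings, competency names, and the equal-weights rule are invented for illustration; as the 'Further issues' slide notes, weights could instead reflect predictive validities.

```python
import numpy as np

# Invented ratings: rows = assessors, columns = competencies (1-5 scale)
ratings = np.array([[4, 3, 5, 4],   # assessor A
                    [3, 3, 4, 4],   # assessor B
                    [5, 4, 4, 3]])  # assessor C
competencies = ["analysis", "influence", "planning", "drive"]

# One simple rule: average across assessors per competency, then take an
# equal-weighted overall score, so no single assessor dominates
per_competency = ratings.mean(axis=0)
overall = per_competency.mean()
for name, score in zip(competencies, per_competency):
    print(f"{name:10s} {score:.2f}")
print(f"overall    {overall:.2f}")
```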

ACs, in addition to tests, include:
- Group exercises:
  - assigned
  - unassigned
  - team exercise
  - presentation
- Individual exercises:
  - in-tray
  - written report
  - role play
  - presentation
- Interviews: structured/situational

What aspects do ACs cover?

Enigma of ACs
- Discriminant validity: ACs are meant to assess performance on competencies (dimensions), with performance on exercises independent, so low inter-correlations between exercises would be good
- Convergent validity: in practice, the evidence for consistency across assessor ratings is for exercises, not dimensions
- Does this matter? (a simulation of this pattern follows below)
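
A minimal simulation, with all numbers invented, of the pattern the slide describes: ratings of the same dimension correlate weakly across exercises, while ratings of different dimensions within one exercise correlate strongly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40  # invented sample of 40 candidates

# Simulate a strong exercise effect and a weak dimension effect
exercise_effect = rng.normal(size=(n, 2))         # 2 exercises
dimension_effect = 0.3 * rng.normal(size=(n, 2))  # 2 dimensions
noise = 0.5

ratings = {}  # (exercise, dimension) -> vector of candidate ratings
for e in range(2):
    for d in range(2):
        ratings[(e, d)] = (exercise_effect[:, e] + dimension_effect[:, d]
                           + noise * rng.normal(size=n))

convergent = np.corrcoef(ratings[(0, 0)], ratings[(1, 0)])[0, 1]
within_exercise = np.corrcoef(ratings[(0, 0)], ratings[(0, 1)])[0, 1]
print(f"same dimension across exercises (convergent): r = {convergent:.2f}")
print(f"different dimensions within one exercise:     r = {within_exercise:.2f}")
```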

Other methods: biodata
- Used to discover 'psychologically meaningful personal history variables' (Mitchell & Klimoski, 1982)
- Questions have a definite answer
- Assumes the best predictor of future behaviour is past behaviour
- Atheoretical
- Uses 'hard' & 'soft' items:
  - hard: 'attended university' (biographical)
  - soft: 'prefers parties to reading books' (interests)
- Needs to be built specifically, e.g. by assembling relevant items from a pool; the skill lies in choosing what is relevant and valid

Other methods: work sample tests
- From behavioural consistency theory: 'past behaviour is the best predictor of future behaviour'
- Types of work samples include:
  - psychomotor, e.g. typing, sewing, using tools
  - individual decision-making, e.g. in-tray exercises
  - job-related information tests
  - group discussions/decisions
  - trainability tests

Performance appraisal
- Used for:
  - promotions
  - salary decisions
  - career development
  - training needs analysis
  - validating selection procedures
  - informing & motivating staff
- But can demotivate staff & reduce performance

Performance appraisal
- Managers' ratings can be based on:
  - direct observation
  - performance data (e.g. sales figures, calls made)
  - self-assessments
  - other measures, such as ratings
- Sources of bias:
  - often based on just one interview
  - inadequate training of assessors
  - limited opportunity to observe
  - halo/horns and 'like me' (similarity) effects
  - political & relationship factors

Multi-source feedback
Popular because:
- flatter organisations mean greater spans of control & more difficulty for managers
- it incorporates the views of multiple stakeholders, so the emphasis is not on the view of one person
- it encourages the involvement of all in improving performance
- it allows comparison of ratings from different sources

360 degree FB
- A questionnaire completed by:
  - self
  - manager
  - subordinates
  - peers
  - customers, etc.
- Based on core work activities
- Statistical analysis (a small aggregation sketch follows below)
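
A minimal sketch of one common first step in that analysis: averaging ratings by rater source and computing the self-other gap as a rough self-awareness signal. All ratings are invented.

```python
import statistics as stats

# Invented 360 ratings for one manager on one competency (1-5 scale)
feedback = {
    "self":         [4.5],
    "manager":      [3.5],
    "peers":        [3.0, 3.5, 4.0],
    "subordinates": [2.5, 3.0, 3.0, 3.5],
}

# Mean rating per source, then the gap between self and everyone else
others = [r for src, vals in feedback.items() if src != "self" for r in vals]
for source, vals in feedback.items():
    print(f"{source:12s} mean = {stats.mean(vals):.2f}")
print(f"self-other gap = {feedback['self'][0] - stats.mean(others):+.2f}")
```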

ACs and 360 degree FB: key questions
- Do different raters use information from different sources?
- How valid/accurate are the ratings provided by these raters?
- Are ratings provided by certain raters more credible?
- What about self-awareness? Are there differences in the way people present themselves?
- What are the implications of these for improving performance?

Thank you for listening
Joan Harvey
George Erdos