Presentation on theme: "Reliability and Validity Selection methods"— Presentation transcript:

1 Reliability and Validity Selection methods
Dr Joan Harvey

2 Reliability and Validity
Reliability is consistency of measurement (see the sketch below):
Different interviewers
Different items in tests
Different raters in LGDs
Validity is about what is being measured:
Construct
Predictive
Concurrent
Face
Content
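As an illustration of reliability as consistency across test items, here is a minimal sketch computing Cronbach's alpha, a common internal-consistency index; the item scores are invented for the example.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: internal consistency across test items.

    scores: 2-D array, rows = candidates, columns = items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical scores for five candidates on four test items (1-5 scale).
ratings = [[4, 5, 4, 4],
           [2, 3, 2, 3],
           [5, 5, 4, 5],
           [3, 3, 3, 2],
           [4, 4, 5, 4]]
print(f"alpha = {cronbach_alpha(ratings):.2f}")  # values near 1 = consistent items
```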

3 Two main issues in assessment
Criterion related: concerns such as the ‘scores’ to be obtained in the criterion measures, i.e. the assessment tools
Performance related: a much bigger problem these days, as we are not very good at sorting out the performance attributable to the job incumbent

4 Further issues
Selection ratio
Subjectivity in assessments
Weighting (see the sketch below):
Equal weights
Compensatory
Minimum levels for each measure
Based on predictive validities
Based on past reliabilities
Expert judgement
Can we select a person without interviewing them? Do we want to?
Training of assessors
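A minimal sketch contrasting two of the weighting approaches above: a compensatory weighted sum (the weights here loosely stand in for predictive validities) versus minimum levels for each measure. All scores, weights, and cut-offs are invented for illustration.

```python
# Hypothetical candidate scores on three selection measures (0-100).
candidate = {"ability_test": 82, "interview": 55, "work_sample": 74}

# Compensatory: weighted sum, so a strong score can offset a weak one.
weights = {"ability_test": 0.5, "interview": 0.2, "work_sample": 0.3}
composite = sum(weights[m] * s for m, s in candidate.items())

# Minimum levels: every measure must clear its own cut-off.
cutoffs = {"ability_test": 60, "interview": 60, "work_sample": 60}
passes_all = all(s >= cutoffs[m] for m, s in candidate.items())

print(f"compensatory composite: {composite:.1f}")  # 74.2: strong test offsets weak interview
print(f"passes minimum levels: {passes_all}")      # False: interview (55) below its cut-off
```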

5 Lots of assessment methods, so criteria for choosing?
Reliability
Validity
Legality
Generality
Cost
Practicality
Professional image
Candidate reactions

6 Psychometrics
Cronbach (1984): “a standardized sample of behaviour which can be described by a numerical scale or category system”
A quantitative assessment of some psychological attribute
Measures of MAXIMAL performance: achievement, ability
Measures of TYPICAL performance: personality, values, attitudes, interests

7 Issues in validation of assessment tools
Rarely conducted adequately
Raters often poorly trained or skilled
Problems of prediction:
Criterion problem (error & bias)
Restriction of range
Small sample size
Tools are used in both R&S and in appraisal

8 Summary
Validity of methods doesn’t necessarily match popularity, e.g. interviews, personality tests
Organisations often fail to conduct adequate job analysis & validation studies, and then there can be problems in legal cases
Meta-analysis results can be informative but should be interpreted with caution
Selection is a 2-way process
Globalisation presents new challenges for selection and diversity
In reality there is often a disparity between theory & practice

9 Business utility of selection
Calculating the cost-benefit of using an ability test:
Saving per employee per year = (r × SDy × Z) − (C / P)
where:
r = validity of the selection test
SDy = standard deviation of employee productivity in £
Z = calibre of recruits (expressed as a standard score on the test)
C = cost per candidate
P = proportion of candidates selected
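As a worked example, the formula can be evaluated directly; all input values below are invented for illustration.

```python
def utility_per_employee(r, sd_y, z, cost_per_candidate, selection_ratio):
    """Saving per employee per year = (r * SDy * Z) - (C / P)."""
    return r * sd_y * z - cost_per_candidate / selection_ratio

# Hypothetical values: test validity 0.4, productivity SD of 10,000 pounds,
# mean recruit standard score 1.0, 50 pounds per candidate, 1 in 5 selected.
saving = utility_per_employee(r=0.4, sd_y=10_000, z=1.0,
                              cost_per_candidate=50, selection_ratio=0.2)
print(f"saving per employee per year: {saving:.0f} pounds")  # 4000 - 250 = 3750
```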

10 Equal opportunities issues
Direct versus indirect discrimination
Most cases are on gender: R&S, promotion; the ‘new’ issue here relates to pay
Next most common are on ethnic origin
All require that information about the processes is retained; this can include notes and ratings from interviews
Cases can go all the way up to the European Court
There is still a glass ceiling; some refer to being female and black as the ‘concrete’ ceiling
Role for ‘positive action’?

11 Main R & S methods Interviews- used by >98% dealt with later Tests:
1 to 1, 2 to 1, panel of several to 1: dealt with later Tests: Intellect, systems, aptitudes, personality, interests, management style, stress responses. Exercises Group activities such as LGDs Scenario analyses In-tray exercises Presentations Problem-solving tasks Case study analyses References Application form or cv May all be combined into an Assessment Centre

12 Comparing validities of different selection methods

13 Assessment Centres
An assessment process, not a place:
Multiple assessments of a candidate
Multiple methods, including competency interview, group exercise, psychometrics
Multiple (trained) assessors
A systematic process for recording & rating

14 ACs
Performance is measured against a pre-determined set of competencies & job-related criteria
Assessors should be rigorously trained to observe candidates’ performance
Overall performance is evaluated on the basis of a combination of all reports, i.e. the overall decision is not left to one (possibly) biased assessor

15 ACs, in addition to tests, include:
Group exercises: assigned, unassigned; team exercise; presentation
Individual exercises: in-tray, written report, role play, presentation
Interviews: structured/situational

17 What aspects do ACs cover?

18 Enigma of ACs
Discriminant validity: ACs assess performance on competencies (dimensions); performance on exercises is independent; low inter-correlations are good
Convergent validity: evidence for consistency across assessor ratings for exercises, not dimensions; does this matter? (see the sketch below)
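A minimal sketch of how these two properties can be checked from AC rating data: correlations between the same dimension rated in different exercises speak to convergent validity, while correlations between different dimensions within one exercise speak to discriminant validity. The rating matrix is invented for illustration.

```python
import numpy as np

# Hypothetical AC ratings: rows = candidates, columns = (exercise, dimension).
# Two exercises (group, in-tray) x two dimensions (communication, planning).
cols = [("group", "comm"), ("group", "plan"), ("intray", "comm"), ("intray", "plan")]
ratings = np.array([[4, 3, 4, 2],
                    [2, 4, 3, 4],
                    [5, 2, 5, 3],
                    [3, 5, 2, 5],
                    [4, 4, 4, 4]], dtype=float)

corr = np.corrcoef(ratings, rowvar=False)  # 4x4 correlation matrix of the columns

def r(a, b):
    return corr[cols.index(a), cols.index(b)]

# Convergent: same dimension, different exercises (want high).
print(f"comm across exercises: {r(('group', 'comm'), ('intray', 'comm')):+.2f}")
# Discriminant: different dimensions, same exercise (want low).
print(f"dims within group ex.: {r(('group', 'comm'), ('group', 'plan')):+.2f}")
```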

19 Summary
Validity of methods doesn’t necessarily match popularity, e.g. interviews, personality tests
Organisations often fail to conduct adequate job analysis & validation studies, and then there can be problems in legal cases
Meta-analysis results can be informative but should be interpreted with caution

20 Other methods: Biodata
Used to discover ‘psychologically meaningful personal history variables’ (Mitchell & Klimoski, 1982)
Questions have a definite answer
Assumes the best predictor of future behaviour is past behaviour
Atheoretical
Uses ‘hard’ & ‘soft’ items:
Hard: ‘attended university’ (biographical)
Soft: ‘prefers parties to reading books’ (interests)
Need to be built specifically, e.g. by assembling relevant items from a pool (the skill is in choosing what is relevant and valid)

21 Other methods: work sample tests
From behavioural consistency theory: ‘past behaviour is the best predictor of future behaviour’
Types of work samples include:
Psychomotor, e.g. typing, sewing, using tools
Individual decision-making, e.g. in-tray exercises
Job-related information tests
Group discussions/decisions
Trainability tests

22 Performance appraisal
Used for:
Promotions
Salary decisions
Career development
Training needs analysis
Validating selection procedures
Informing & motivating staff
But can demotivate staff & reduce performance

23 Performance appraisal
Managers’ ratings can be based on:
Direct observation
Performance data (e.g. sales figures, calls made)
Self-assessments
Other measures such as ratings
Sources of bias:
Often just based on one interview
Inadequate training of assessors
Opportunity to observe
Halo/horns, ‘like me’
Political & relationship factors

24 Multi-source FB
Popular because:
Flatter organisations mean greater span of control & more difficulty for managers
Incorporates views of multiple stakeholders; emphasis not on the view of one person
Encourages involvement of all in improving performance
Allows comparison of ratings from different sources

25 360 degree FB
A questionnaire completed by:
Self
Manager
Subordinates
Peers
Customers, etc.
Based on core work activities
Statistical analysis (see the sketch below)
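One simple form the statistical analysis can take: average the questionnaire ratings per source and compare self-ratings with the views of others. The ratings below are invented for illustration.

```python
from statistics import mean

# Hypothetical 360 ratings (1-5) on one core work activity, keyed by source.
ratings = {
    "self":         [5, 4, 5],
    "manager":      [3, 4, 3],
    "subordinates": [4, 3, 3, 4],
    "peers":        [3, 3, 4],
}

by_source = {source: mean(vals) for source, vals in ratings.items()}
others = [v for s, vals in ratings.items() if s != "self" for v in vals]
gap = by_source["self"] - mean(others)  # positive gap = self-rating above others

for source, avg in by_source.items():
    print(f"{source:>12}: {avg:.2f}")
print(f"self-other gap: {gap:+.2f}")  # a large positive gap may signal low self-awareness
```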

26 ACs and 360 degree FB
Key questions:
Do different raters use information from different sources?
How valid/accurate are the ratings provided by these raters?
Are ratings provided by certain raters more credible?
What about self-awareness? Are there differences in the way people present themselves?
What are the implications of these for improving performance?

27 Thank you for listening
Joan Harvey George Erdos

