MGTO 324 Recruitment and Selections Validity I (Construct Validity) Kin Fai Ellick Wong Ph.D. Department of Management of Organizations Hong Kong University of Science & Technology


Prologue Why validity? Why is it important for personnel selection? –According to the Standards for Educational and Psychological Testing (1985) from the American Psychological Association, employment testing should follow these standards: "Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores." (p. 9) –In particular, when discussing employment testing: "In employment settings tests may be used in conjunction with other information to make predictions or decisions about individual personnel actions. The principal obligation of employment testing is to produce reasonable evidence for the validity of such predictions and decisions." (p. 59)

Prologue A refresher on some essential concepts –What is validity? The extent to which a test measures what it is supposed to measure In the APA's standards –Validity "refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores" (p. 9) –What is test validation? The establishment of validity The process of examining the extent to which a test is valid May or may not be statistical, depending on the type of evidence being examined

Prologue A refresher on some essential concepts –What is the relationship between reliability and validity? When a test is very low in reliability, its scores contain a large amount of random error… so can a test with low reliability be a valid test? When a test is high in reliability, its scores represent something meaningful (i.e., not error)… but is it therefore also a valid test? Maximum validity coefficient (r12max)

Prologue How Reliability Affects Validity* [Table: Reliability of Test | Reliability of Criterion | Maximum Validity (Correlation)] *The first column shows the reliability of the test. The second column displays the reliability of the validity criterion. The numbers in the third column are the maximum theoretical correlations between tests, given the reliability of the measures. Source: Psychological testing: Principles, applications, and issues (5th Ed.), p. 150.
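The ceiling shown in that table follows the standard psychometric attenuation bound: the maximum validity coefficient equals the square root of the product of the two reliabilities, r12max = √(r11 × r22). A minimal sketch in Python, with illustrative reliability values that are assumptions for this example, not values taken from the table:

```python
import math

def max_validity(test_reliability: float, criterion_reliability: float) -> float:
    """Maximum theoretical validity coefficient given the reliability of
    the test (r11) and of the criterion (r22): r12max = sqrt(r11 * r22)."""
    return math.sqrt(test_reliability * criterion_reliability)

# e.g., a test with reliability .81 predicting a criterion with reliability .64
# can correlate at most sqrt(.81 * .64) = .72 with that criterion
print(round(max_validity(0.81, 0.64), 2))  # 0.72
```

Note that even a perfectly constructed test cannot exceed this bound, which is why an unreliable test cannot be highly valid.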

Prologue A refresher on some essential concepts –Three types of validity Content-related validity Construct-related validity Criterion-related validity Face validity??? –Some remarks Validation is a necessary step in test development Do not be too rigid about drawing distinctions among these three types of validity "All validation is one, and in a sense all is construct validation" (Cronbach, 1980)

Outline Construct- related Validity Part 1: Psychological Constructs Part 2: Assessing construct validity

Outline Construct- related Validity Part 1: Psychological Constructs Part 2: Assessing construct validity

Part 1: Psychological constructs –What is a psychological construct? Something constructed by mental synthesis Does not exist as a separate thing we can touch or feel Cannot be used as an objective criterion Examples: intelligence, love, curiosity, mental health –Some constructs are conceptually related Self-esteem, General self-efficacy, Self-image Need for power, Aggressiveness, Need for achievement Need for cognition, Curiosity –Some constructs are conceptually not so related Need for power vs. Conscientiousness Intelligence vs. Emotional Stability

Part 1: Psychological constructs –What is construct validity? The extent to which a test measures a theoretical concept/construct What does the construct mean? What are its relationships with other constructs? A series of activities in which a researcher simultaneously defines some constructs and develops the instrumentation to measure them A gradual process –Each time a relationship is demonstrated, a bit of meaning is attached to the test

Outline Construct- related Validity Part 1: Psychological Constructs Part 2: Assessing construct validity

Construct-related validation –Step 1: Defining the construct –Step 2: Identifying related constructs –Step 3: Identifying unrelated constructs –Step 4: Preliminary assessment of the test validity: converging (convergent) and diverging (discriminant) evidence –Step 5: Assessing validity using statistical methods –Multi-Trait-Multi-Method (MTMM) Technique –Factor Analysis

Part 2: Assessing construct validity Construct-related validation –Step 1: Defining the construct Defining Aggressiveness:

Part 2: Assessing construct validity Construct-related validation –Steps 2 & 3: Identifying related and unrelated constructs


Part 2: Assessing construct validity Construct-related validation –Step 4: Preliminary assessment of the test validity: converging (convergent) and divergent (discriminant) evidence [Correlation diagram among A, N, and H, showing a correlation of .91; A = Aggressiveness; N = Need for Power; H = Honesty]

Part 2: Assessing construct validity Construct-related validation –Step 4: Preliminary assessment of the test validity: converging (convergent) and divergent (discriminant) evidence Problems… –Tests using similar methods are likely to be correlated with each other to a certain extent –e.g., teacher ratings: conservative teachers vs. liberal teachers may rate systematically differently

Part 2: Assessing construct validity Construct-related validation –Step 4: Preliminary assessment of the test validity Problems…

Part 2: Assessing construct validity Construct-related validation –Step 5: Assessing validity using statistical methods Solutions: The multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959) Multi-trait –Measuring more than one construct –Honesty, Aggressiveness, Intelligence Multi-method –Measured by more than one method –Teacher rating, Tests, Observers' rating

Part 2: Assessing construct validity Construct-related validation –Step 5: Assessing validity using statistical methods Solutions: The multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959) –Tests measuring different constructs with different methods should have low intercorrelations –Tests measuring different constructs with the same method should have moderate intercorrelations (i.e., divergent evidence) –Tests measuring the same construct with different methods should have high intercorrelations (i.e., convergent evidence)

Part 2: Assessing construct validity Tests measuring different constructs with different methods should have low intercorrelations [Table: correlations between teacher ratings and peer ratings of different traits (Honesty, Aggressiveness, Intelligence) are low, e.g., .22]

Part 2: Assessing construct validity Tests measuring different constructs with the same method should have moderate intercorrelations (i.e., divergent evidence) [Table: within peer ratings, correlations among different traits (Honesty, Aggressiveness, Intelligence) are moderate, e.g., .40]

Part 2: Assessing construct validity Tests measuring the same construct with different methods should have high intercorrelations (i.e., convergent evidence) [Table: correlations between teacher ratings and peer ratings of the same trait are high, e.g., .62]
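The three MTMM predictions above can be sketched as a small classifier over (trait, method) pairs. The correlation values below are illustrative assumptions loosely echoing the .22/.40/.62 pattern, not data from the slides:

```python
# Hypothetical MTMM correlations between (trait, method) pairs --
# illustrative values only, not real ratings.
corr = {
    (("Honesty", "teacher"), ("Honesty", "peer")): 0.62,         # same trait, different methods
    (("Honesty", "peer"), ("Aggressiveness", "peer")): 0.40,     # different traits, same method
    (("Honesty", "teacher"), ("Aggressiveness", "peer")): 0.22,  # different traits, different methods
}

def mtmm_kind(a, b):
    """Classify an MTMM cell by whether the trait and the method overlap."""
    same_trait, same_method = a[0] == b[0], a[1] == b[1]
    if same_trait and same_method:
        return "reliability (main diagonal)"
    if same_trait:
        return "convergent evidence (should be high)"
    if same_method:
        return "method-shared (should be moderate)"
    return "heterotrait-heteromethod (should be low)"

for (a, b), r in corr.items():
    print(a, "vs", b, "r =", r, "->", mtmm_kind(a, b))
```

Construct validity is supported when the observed correlations follow this high / moderate / low ordering.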

Part 2: Assessing construct validity Factor Analysis –Is there a general (common) factor determining performance in various subjects? M = Mathematics, P = Physics, C = Chemistry, E = English, H = History, F = French –Determined by a single factor? i.e., General Intelligence (I) explains an individual's performance in all subjects –Or are there two factors? i.e., one group of subjects is determined by Quantitative Ability (Q), and another group is determined by Verbal Ability (V)

Part 2: Assessing construct validity [Path diagrams: a single-factor model vs. a two-factor model]

Part 2: Assessing construct validity Which one is more correct? –It is determined by the eigenvalues, i.e., the total amount of variance accounted for by a particular factor When a two-factor model accounts for significantly more variance than a single-factor model, we believe that there are two factors When there is no such evidence, a single-factor model is preferred –I'll show you the steps and interpretations of Factor Analysis results in the coming Workshop
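The eigenvalue logic above can be sketched with a toy correlation matrix. The six subjects and their two-cluster correlation pattern are assumptions for illustration, not real data; with two clear clusters, exactly two eigenvalues stand out above 1:

```python
import numpy as np

# Hypothetical correlation matrix for six subjects (M, P, C, E, H, F),
# constructed so that M/P/C and E/H/F form two clusters (within-cluster
# r = .70, between-cluster r = .20) -- illustrative values only.
R = np.array([
    [1.0, 0.7, 0.7, 0.2, 0.2, 0.2],
    [0.7, 1.0, 0.7, 0.2, 0.2, 0.2],
    [0.7, 0.7, 1.0, 0.2, 0.2, 0.2],
    [0.2, 0.2, 0.2, 1.0, 0.7, 0.7],
    [0.2, 0.2, 0.2, 0.7, 1.0, 0.7],
    [0.2, 0.2, 0.2, 0.7, 0.7, 1.0],
])

# eigvalsh returns eigenvalues in ascending order; reverse to get largest first
eigenvalues = np.linalg.eigvalsh(R)[::-1]
print(np.round(eigenvalues, 2))

# Two eigenvalues well above 1 (here 3.0 and 1.8) suggest that a
# two-factor model accounts for substantially more variance than
# a single-factor model.
n_factors = int((eigenvalues > 1).sum())
print("Suggested number of factors:", n_factors)
```

This is the intuition behind the common "eigenvalue greater than 1" screening rule; a full analysis would also estimate factor loadings.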

Part 2: Assessing construct validity How can I use Factor Analysis to examine the construct validity of a test? –If the test is assumed to measure a single construct (e.g., need for power, self-efficacy), a single-factor model is expected Construct validity is evident when Factor Analysis yields a single-factor model –If the test is assumed to have multiple facets (e.g., intelligence, including memory, verbal ability, spatial-visual ability, etc.) Construct validity is evident when Factor Analysis yields a model that (a) has multiple factors and (b) theoretically related items load on the same factor