MGTO 324 Recruitment and Selection
Validity I (Construct Validity)
Kin Fai Ellick Wong, Ph.D.
Department of Management of Organizations, Hong Kong University of Science & Technology
Prologue
Why validity? Why is it important to personnel selection?
– According to the Standards for Educational and Psychological Testing (1985) from the American Psychological Association, employment testing should follow these standards: "Validity is the most important consideration in test evaluation. The concept refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores." (p. 9)
– In particular, when discussing employment testing: "In employment settings tests may be used in conjunction with other information to make predictions or decisions about individual personnel actions. The principal obligation of employment testing is to produce reasonable evidence for the validity of such predictions and decisions." (p. 59)
Prologue
A refresher on some essential concepts
– What is validity?
  The extent to which a test measures what it is supposed to measure.
  In the APA's standards, validity "refers to the appropriateness, meaningfulness, and usefulness of the specific inferences made from test scores" (p. 9).
– What is test validation?
  The process of establishing validity, i.e., of examining the extent to which a test is valid.
  It may or may not be statistical, depending on which type of evidence is being examined.
Prologue
A refresher on some essential concepts
– What is the relationship between reliability and validity?
  When a test is very low in reliability, its scores largely reflect random error... so can a test with low reliability be a valid test? (No: unreliability places a ceiling on validity.)
  When a test is high in reliability, its scores reflect something meaningful (i.e., not error)... but is it therefore also a valid test? (Not necessarily: a test can reliably measure the wrong thing.)
  Maximum validity coefficient: r12max = √(r11 × r22), where r11 is the reliability of the test and r22 is the reliability of the criterion. The table below illustrates this ceiling.
Prologue
How Reliability Affects Validity*

Reliability of Test   Reliability of Criterion   Maximum Validity (Correlation)
1.0                   1.0                        1.00
.8                    1.0                        .89
.6                    1.0                        .77
.4                    1.0                        .63
.2                    1.0                        .45
.0                    1.0                        .00
1.0                   .5                         .71
.8                    .5                         .63
.6                    .5                         .55
.4                    .5                         .45
.2                    .5                         .32
.0                    .5                         .00
1.0                   .0                         .00
.8                    .0                         .00
.6                    .0                         .00
.4                    .0                         .00
.2                    .0                         .00
.0                    .0                         .00

*The first column shows the reliability of the test. The second column shows the reliability of the validity criterion. The numbers in the third column are the maximum theoretical correlations between the test and the criterion, given the reliability of the measures. Source: Psychological Testing: Principles, Applications, and Issues (5th ed.), p. 150.
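The ceiling shown in the table follows from the attenuation bound r12max = √(r11 × r22). Below is a minimal Python sketch (the function and variable names are mine, not from the course materials) that reproduces a few rows of the table.

```python
# Maximum possible validity given the reliabilities of the test and the criterion.
# Illustrative sketch: r_test and r_criterion are reliability coefficients in [0, 1].
from math import sqrt

def max_validity(r_test: float, r_criterion: float) -> float:
    """Upper bound on the test-criterion correlation: sqrt(r_test * r_criterion)."""
    return sqrt(r_test * r_criterion)

# Reproduce a few rows of the table above.
for r_t, r_c in [(1.0, 1.0), (0.8, 1.0), (0.6, 1.0), (1.0, 0.5), (0.4, 0.5), (0.6, 0.0)]:
    print(f"test reliability {r_t:.1f}, criterion reliability {r_c:.1f} "
          f"-> maximum validity {max_validity(r_t, r_c):.2f}")
```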
Prologue
A refresher on some essential concepts
– Three types of validity:
  Content-related validity
  Construct-related validity
  Criterion-related validity
  (Face validity???)
– Some remarks:
  Validation is a necessary step in test development.
  Do not be overly rigid in drawing distinctions among these three types of validity: "All validation is one, and in a sense all is construct validation" (Cronbach, 1980).
Outline: Construct-related Validity
Part 1: Psychological Constructs
Part 2: Assessing construct validity
Part 1: Psychological constructs
– What is a psychological construct?
  Something constructed by mental synthesis.
  It does not exist as a separate thing we can touch or feel, and it cannot be used as an objective criterion.
  Examples: intelligence, love, curiosity, mental health.
– Some constructs are conceptually related:
  Self-esteem, general self-efficacy, self-image
  Need for power, aggressiveness, need for achievement
  Need for cognition, curiosity
– Some constructs are conceptually not so related:
  Need for power vs. conscientiousness
  Intelligence vs. emotional stability
Part 1: Psychological constructs
– What is construct validity?
  The extent to which a test measures a theoretical concept or construct: What does the construct mean? What are its relationships with other constructs?
  Construct validation is a series of activities in which a researcher simultaneously defines a construct and develops the instrumentation to measure it.
  It is a gradual process: each time a relationship is demonstrated, a bit of meaning is attached to the test.
Part 2: Assessing construct validity
Construct-related validation proceeds in five steps:
– Step 1: Defining the construct
– Step 2: Identifying related constructs
– Step 3: Identifying unrelated constructs
– Step 4: Preliminary assessment of the test's validity using convergent and divergent (discriminant) evidence
– Step 5: Assessing validity using statistical methods, e.g., the Multi-Trait-Multi-Method (MTMM) technique and Factor Analysis
Part 2: Assessing construct validity
Construct-related validation
– Step 1: Defining the construct
  Example: defining Aggressiveness
Part 2: Assessing construct validity
Construct-related validation
– Steps 2 & 3: Identifying related and unrelated constructs
Part 2: Assessing construct validity
Construct-related validation
– Step 4: Preliminary assessment of the test's validity using convergent and divergent (discriminant) evidence

            A       N       H
A          .91
N          .54     .87
H          .11    -.03     .79

A = Aggressiveness; N = Need for Power; H = Honesty
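As an illustration only (the layout and names below are mine; the numbers are the values in the matrix above), convergent and divergent evidence can be read off the matrix like this:

```python
# Off-diagonal correlations from the illustrative matrix above.
correlations = {
    ("Aggressiveness", "Need for Power"): 0.54,   # conceptually related constructs
    ("Aggressiveness", "Honesty"): 0.11,          # conceptually unrelated constructs
    ("Need for Power", "Honesty"): -0.03,         # conceptually unrelated constructs
}

convergent = correlations[("Aggressiveness", "Need for Power")]
divergent = [correlations[("Aggressiveness", "Honesty")],
             correlations[("Need for Power", "Honesty")]]

# Preliminary check: related constructs should correlate more strongly than unrelated ones.
print("convergent evidence:", convergent)
print("divergent evidence :", divergent)
print("expected pattern   :", all(convergent > abs(r) for r in divergent))
```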
Part 2: Assessing construct validity
Construct-related validation
– Step 4: Preliminary assessment of the test's validity using convergent and divergent (discriminant) evidence
  Problems...
  – Tests using similar methods are likely to be correlated with each other to some extent, regardless of the constructs they measure.
  – Example: if every construct is measured by teacher ratings, characteristics of the raters (e.g., conservative teachers vs. liberal teachers) can inflate the correlations.
Part 2: Assessing construct validity
Construct-related validation
– Step 5: Assessing validity using statistical methods
  Solution: the multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959)
  Multi-trait: more than one construct is measured (e.g., Honesty, Aggressiveness, Intelligence)
  Multi-method: each construct is measured by more than one method (e.g., teacher ratings, tests, observers' ratings)
Part 2: Assessing construct validity
Construct-related validation
– Step 5: Assessing validity using statistical methods
  Solution: the multi-trait-multi-method (MTMM) matrix (Campbell & Fiske, 1959)
  – Tests measuring different constructs with different methods should have low intercorrelations
  – Tests measuring different constructs with the same method should have moderate intercorrelations (i.e., divergent evidence)
  – Tests measuring the same construct with different methods should have high intercorrelations (i.e., convergent evidence)
  These three expectations are illustrated in the tables and the sketch below.
Part 2: Assessing construct validity
Tests measuring different constructs with different methods should have low intercorrelation.

Teacher ratings by peer ratings:
            Honesty   Aggr.   Intell.
Honesty
Aggr.        .22
Intell.      .10      .13
Part 2: Assessing construct validity
Tests measuring different constructs with the same method should have moderate intercorrelation (i.e., divergent evidence).

Within peer ratings:
            Honesty   Aggr.   Intell.
Honesty
Aggr.        .40
Intell.      .22      .30
Part 2: Assessing construct validity
Tests measuring the same construct with different methods should have high intercorrelation (i.e., convergent evidence).

Teacher ratings by peer ratings:
            Honesty   Aggr.   Intell.
Honesty      .62
Aggr.        .40      .70
Intell.      .22      .30     .64

(The diagonal values .62, .70, and .64 are the convergent validities: the same trait measured by two different methods.)
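As a minimal sketch (the variable names are mine; the correlations are the illustrative values from the three tables above), the expected MTMM ordering can be checked directly:

```python
# Check the ordering of correlations that Campbell & Fiske's (1959) MTMM logic expects,
# using the illustrative teacher-rating / peer-rating numbers from the tables above.
from statistics import mean

# Same trait measured by different methods (teacher vs. peer ratings): convergent validities.
convergent = [0.62, 0.70, 0.64]
# Different traits measured by the same method (within peer ratings).
heterotrait_monomethod = [0.40, 0.22, 0.30]
# Different traits measured by different methods.
heterotrait_heteromethod = [0.22, 0.10, 0.13]

print(f"same trait, different method (convergent): {mean(convergent):.2f}")
print(f"different trait, same method              : {mean(heterotrait_monomethod):.2f}")
print(f"different trait, different method         : {mean(heterotrait_heteromethod):.2f}")

# Convergent correlations should be the largest, and correlations between different
# traits should shrink further when the method also differs.
print("expected MTMM pattern holds:",
      mean(convergent) > mean(heterotrait_monomethod) > mean(heterotrait_heteromethod))
```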
Part 2: Assessing construct validity
Factor Analysis
– Is there a general (common) factor determining performance in various school subjects?
  M = Mathematics, P = Physics, C = Chemistry, E = English, H = History, F = French
– Is performance determined by a single factor, i.e., does General Intelligence (I) explain an individual's performance in all subjects?
– Or are there two factors, i.e., is one group of subjects determined by Quantitative Ability (Q) and the other group by Verbal Ability (V)?
Part 2: Assessing construct validity
(Slide shows a single-factor model and a two-factor model of the six subjects side by side.)
Part 2: Assessing construct validity
Which model is more correct?
– It is determined by the eigenvalues, i.e., the total amount of variance accounted for by a particular factor.
  When a two-factor model accounts for significantly more variance than a single-factor model, we conclude that there are two factors.
  When there is no such evidence, a single-factor model is preferred.
– I'll show you the steps and the interpretation of Factor Analysis results in the coming workshop; a rough sketch of the idea follows below.
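A minimal sketch of the eigenvalue comparison, assuming an invented correlation matrix in which the quantitative subjects (M, P, C) and the verbal subjects (E, H, F) form two clusters; the eigenvalues show how much of the total variance each candidate factor accounts for.

```python
# Eigenvalues of a (made-up) correlation matrix for six school subjects:
# M = Mathematics, P = Physics, C = Chemistry, E = English, H = History, F = French.
import numpy as np

R = np.array([
    [1.00, 0.70, 0.65, 0.20, 0.15, 0.18],
    [0.70, 1.00, 0.68, 0.18, 0.20, 0.15],
    [0.65, 0.68, 1.00, 0.15, 0.18, 0.20],
    [0.20, 0.18, 0.15, 1.00, 0.72, 0.66],
    [0.15, 0.20, 0.18, 0.72, 1.00, 0.70],
    [0.18, 0.15, 0.20, 0.66, 0.70, 1.00],
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]      # sorted largest first
explained = eigenvalues / eigenvalues.sum()    # proportion of total variance

for k, (ev, prop) in enumerate(zip(eigenvalues, explained), start=1):
    print(f"factor {k}: eigenvalue = {ev:.2f}, variance explained = {prop:.0%}")

# With two clear clusters, the first two eigenvalues are large and the rest are small,
# favouring a two-factor (Q and V) model; if only the first eigenvalue were large,
# a single general-intelligence factor would be preferred.
```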
Part 2: Assessing construct validity
How can I use Factor Analysis to examine the construct validity of a test?
– If the test is assumed to measure a single construct (e.g., need for power, self-efficacy), a single-factor model is expected: construct validity is evident when Factor Analysis yields a single factor.
– If the test is assumed to have multiple facets (e.g., intelligence, including memory, verbal ability, spatial-visual ability, etc.): construct validity is evident when Factor Analysis yields a model in which (a) there are multiple factors and (b) theoretically related items load on the same factor.
  A rough sketch of this check on simulated data is given below.
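For the multi-facet case, here is a minimal sketch on simulated item responses, using scikit-learn's FactorAnalysis (the simulated data, loadings, and names are mine; with a real test the input would be respondents' item scores):

```python
# Check whether theoretically related items load on the same factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_respondents = 500

# Simulate two latent constructs; items 1-3 are written to tap the first construct,
# items 4-6 the second (this is the theoretical structure the analysis should recover).
latent = rng.normal(size=(n_respondents, 2))
true_loadings = np.array([
    [0.8, 0.0], [0.7, 0.0], [0.75, 0.0],   # items 1-3 -> construct 1
    [0.0, 0.8], [0.0, 0.7], [0.0, 0.75],   # items 4-6 -> construct 2
])
items = latent @ true_loadings.T + 0.5 * rng.normal(size=(n_respondents, 6))

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)

# Rows = factors, columns = items. Construct validity is supported when items 1-3
# load mainly on one factor and items 4-6 mainly on the other.
print(np.round(fa.components_, 2))
```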