Reliability, validity, and scaling

1 Reliability, validity, and scaling

2 Basics What is reliability? What is validity?
Can you have one without the other? Why or why not?

3 Reliability
Observed score = True score + random error + systematic error. Explain.
Why is this important?
How can you decrease these errors?
What does a reliability estimate of .85 tell you?
What do we want our reliability estimates to be?
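The decomposition above can be sketched as a small simulation; the function names and the variance values are illustrative assumptions, not from the slides. Reliability is the share of observed-score variance that comes from true scores:

```python
import random
import statistics

random.seed(42)

def simulate_scores(n=5000, true_sd=10.0, error_sd=5.0, bias=2.0):
    """Observed score = true score + random error + constant systematic error."""
    true = [random.gauss(50, true_sd) for _ in range(n)]
    observed = [t + random.gauss(0, error_sd) + bias for t in true]
    return true, observed

def reliability(true, observed):
    # Reliability is true-score variance over observed-score variance.
    # The constant bias shifts every score equally, so it adds no variance;
    # only the random error lowers reliability.
    return statistics.variance(true) / statistics.variance(observed)

true, observed = simulate_scores()
rel = reliability(true, observed)
print(round(rel, 2))  # population value: 100 / (100 + 25) = 0.80
```

Note how the systematic error harms validity (every score is off by the same amount) but leaves this reliability estimate untouched.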

4 Types of Reliability
What are each, how do you calculate them, when would you use them, and how can you increase them?
Inter-rater reliability
Test-retest reliability
Parallel-forms reliability

5 Internal consistency reliability
What are each, how do you calculate them, what do they tell you, where do you want values to be, and how can you increase them?
Average inter-item correlation
Average item-total correlation
Split-half reliability
Cronbach’s alpha
Kuder-Richardson Formula 20 (KR-20)
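As a concrete illustration of Cronbach's alpha, the standard formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), applied to made-up item responses:

```python
import statistics

# Illustrative data: 5 respondents x 4 items on a 1-5 scale (invented).
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
]

def cronbach_alpha(rows):
    k = len(rows[0])
    cols = list(zip(*rows))                      # one tuple per item
    item_vars = [statistics.variance(c) for c in cols]
    totals = [sum(r) for r in rows]              # each person's total score
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(items)
print(round(alpha, 3))
```

With items this consistent, alpha comes out very high (about .96); real attitude scales usually land lower.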

6 Cronbach’s alpha (α): Schmitt, 1996
Write down what you knew about alpha before reading the article.
List 2 things you learned from the article.

7 Uses and abuses of alpha
What is the difference between internal consistency and homogeneity? Which does alpha tell you?
What increases alpha?
What is a problem with using alpha to correct correlations for reliability?
Why is a high alpha not necessarily a good thing?
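One answer to "what increases alpha?": the number of items, mechanically. The Spearman-Brown prophecy formula for standardized alpha shows that even weakly related items (average inter-item r = .20) reach a respectable-looking alpha once the scale is long enough:

```python
def standardized_alpha(avg_r, k):
    # Standardized alpha for k items with average inter-item
    # correlation avg_r (the Spearman-Brown prophecy formula).
    return k * avg_r / (1 + (k - 1) * avg_r)

for k in (5, 10, 20, 40):
    print(k, round(standardized_alpha(0.2, k), 2))
# 5  -> 0.56
# 10 -> 0.71
# 20 -> 0.83
# 40 -> 0.91
```

So alpha = .91 can reflect 40 barely related items rather than one homogeneous construct, which is part of the "abuses" point above.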

8 Sample SPSS analyses

9-11 [SPSS reliability analysis output screenshots; images not preserved in the transcript]

12 Construct Validity
How does construct validity relate to internal and external validity?
What are each of these, how would you calculate them, and what do they tell you?
Translational validity
  Face validity
  Content validity
Criterion-related validity
  Predictive validity
  Concurrent validity (aka known groups)
  Convergent validity
  Discriminant validity/Divergent validity
How high/low should your correlations be?

13 Multitrait-multimethod matrix (MTMM)
Nomological network: Cronbach & Meehl, 1955
MTMM: Campbell & Fiske, 1959
Look at example p. 69. What information does this give you?
Pattern matching: advantages/disadvantages
How can SEM be used to show this?

14 Design Threats to construct validity
What are these, and how can the problem be decreased?
Inadequate preoperational explication of constructs
Mono-operation bias
Mono-method bias
Interaction of different treatments
Interaction of testing and treatment
Restricted generalizability across constructs
Confounding constructs and levels of constructs

15 Social threats to CV
What are these, and how can the problem be decreased?
Hypothesis guessing
Evaluation apprehension
Experimenter expectancies
Other threats:
  Social desirability
  Response styles
  Demand characteristics

16 Method variance/Method bias
Podsakoff, MacKenzie, & Podsakoff, 2012
What is it? What are some types and causes?
What effects does it have? Why does it have these effects?

17 How can you deal with method bias?
Use more than one method: get predictor and criterion from different sources
Separate measures temporally, proximally, or psychologically
Use different types and points on scales
Label all points on scales, and make items less ambiguous
Decrease tendencies for socially desirable responses
Reverse-score items
Try to increase motivation and ability of participants
Control for bias statistically (several options)
Make it a manipulation or look for interaction effects
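One bullet above, reverse-scoring, is mechanical enough to show in a couple of lines; the scale endpoints here are assumed defaults:

```python
def reverse_score(response, low=1, high=5):
    # On a low-high scale, the reversed value mirrors around the
    # midpoint: 1 <-> 5, 2 <-> 4, 3 <-> 3 on a 1-5 scale.
    return low + high - response

print([reverse_score(r) for r in [1, 2, 3, 4, 5]])  # [5, 4, 3, 2, 1]
print(reverse_score(2, low=1, high=7))              # 6
```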

18 Controlling for other variables
Westfall & Yarkoni, 2016
What is the main point of this article?

19 Statistical control
Ways to test:
Incremental validity
  If 2 measures are distinct
  If 1 measure is “better” than another
Regression methods for control
  What is the problem with using this method?
  When will it be more of a problem?
  How can you correct this problem?
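A minimal simulation of the Westfall & Yarkoni problem (all names and parameter values here are assumptions): X has no direct effect on Y, both are driven by a confound C, and we "control" for an unreliable measurement of C. The partial regression slope for X stays well above zero:

```python
import random
import statistics

random.seed(1)
n = 20000
C = [random.gauss(0, 1) for _ in range(n)]        # the real confound
X = [c + random.gauss(0, 1) for c in C]           # X driven only by C
Y = [c + random.gauss(0, 1) for c in C]           # Y driven only by C
C_meas = [c + random.gauss(0, 1) for c in C]      # noisy covariate, reliability .50

def cov(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

def partial_slope(y, x, z):
    # OLS slope on x in the two-predictor regression y ~ x + z.
    num = cov(x, y) * cov(z, z) - cov(z, y) * cov(x, z)
    den = cov(x, x) * cov(z, z) - cov(x, z) ** 2
    return num / den

slope = partial_slope(Y, X, C_meas)
print(round(slope, 2))  # population value is 1/3, not 0
```

Controlling for the error-free C instead drives the slope to zero: the inflated Type 1 error rates come entirely from measurement error in the covariate.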

20 Table 1. Type 1 error rates for a few parameter combinations.
Westfall J, Yarkoni T (2016) Statistically Controlling for Confounding Constructs Is Harder than You Think. PLOS ONE 11(3): e

21 Fig 12. Power to detect incremental validity using SEM.
Westfall J, Yarkoni T (2016) Statistically Controlling for Confounding Constructs Is Harder than You Think. PLOS ONE 11(3): e

22 Levels of measurement
Why do they matter?
What are the four types? Examples? Stats that can be run on them?
Nominal
Ordinal
Interval
Ratio
What types of scales do we typically use in psychology? Is that a problem?

23 Basics of scaling
Scales vs. response scales vs. index vs. questionnaire (every set of questions is not a scale)
What are unidimensional vs. multidimensional scales?
When should you use one vs. the other?

24 Types of (unidimensional) scales
Thurstone (method of equal-appearing intervals)
  Generate items
  Have judges rate them
  Choose ones that represent the whole scale
Guttman (cumulative scale)
  Coefficient of reproducibility
Likert (summative rating scale)
Semantic differential scale
What are the advantages and disadvantages of each? How do you score each?

25 Thurstone
I believe the church is the greatest institution in America today. (0.2)
I believe in religion, but I seldom go to church. (5.4)
I believe in sincerity and goodness without any church ceremonies. (6.7)
I believe the church is a hindrance to religion for it still depends on magic, superstition, and myth. (9.6)
I think the church is a parasite on society. (11.0)
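Scoring the Thurstone example above is the mean (or median) of the scale values of the statements a respondent endorses; the short keys below are just labels for the slide's five items:

```python
import statistics

# Judge-assigned scale values from the five example statements.
scale_values = {
    "greatest institution": 0.2,
    "seldom go to church": 5.4,
    "without ceremonies": 6.7,
    "hindrance to religion": 9.6,
    "parasite on society": 11.0,
}

def thurstone_score(endorsed):
    # A respondent's attitude score: average scale value of endorsed items.
    return statistics.mean(scale_values[item] for item in endorsed)

score = thurstone_score(["seldom go to church", "without ceremonies"])
print(round(score, 2))  # 6.05
```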

26 Guttman
I am more than 54 inches tall.
I am more than 56 inches tall.
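The coefficient of reproducibility for a Guttman scale counts how often responses deviate from the ideal cumulative step pattern implied by each person's total; the response matrix here is invented, with items ordered easiest to hardest:

```python
# Each row is one respondent; 1 = agree, items ordered easiest to hardest.
responses = [
    [1, 1, 1, 0],   # perfect cumulative pattern
    [1, 1, 0, 0],   # perfect cumulative pattern
    [1, 0, 1, 0],   # two deviations from the ideal [1, 1, 0, 0]
    [1, 1, 1, 1],   # perfect cumulative pattern
]

def reproducibility(rows):
    # 1 - (errors / total responses), where an error is any cell that
    # differs from the step pattern implied by the person's total.
    errors = 0
    for row in rows:
        total = sum(row)
        ideal = [1] * total + [0] * (len(row) - total)
        errors += sum(a != b for a, b in zip(row, ideal))
    return 1 - errors / (len(rows) * len(rows[0]))

print(reproducibility(responses))  # 0.875
```

By convention, a coefficient of .90 or above is taken as evidence that the items form a usable cumulative scale.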

27 Other issues with scaling
Standardization
Norms

28 Steps
To choosing a scale to use?
To creating a scale?

29 Steps to creating an index
Conceptualize the index
Operationalize and measure the components
Develop the rules for calculating the index score (weighting?)
Validate it!
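The calculation step can be sketched as a weighted sum; the component names, scores, and weights below are invented for illustration:

```python
# Invented component scores (each already rescaled to 0-1) and weights.
components = {"income": 0.7, "education": 0.9, "occupation": 0.5}
weights = {"income": 0.4, "education": 0.4, "occupation": 0.2}

# Rule for calculating the index score: a weighted sum, with an
# explicit check that the weights form a complete set.
assert abs(sum(weights.values()) - 1.0) < 1e-9

index = sum(components[k] * weights[k] for k in components)
print(round(index, 2))  # 0.74
```

Equal weighting is the usual default; unequal weights need a justification that the validation step should test.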

30 Methods assignment #1

31 Next week
Surveys: chapter, articles on data cleaning, short articles on survey design
ESM presentation
Scale development assignment due
Decision about Feb. 28

