Scales & Indices

Measurement Overview
Using multiple indicators to create variables is a two-step process:
1. Decide which items go together to measure which variables (factor analysis)
2. Evaluate the reliability of the resulting multi-item scales (Cronbach's alpha)

Factor Analysis
Starts with a group of similar indicators (survey items)
Sorts items based on patterns of inter-item similarities, i.e., which items are correlated (which ones group together)
Items that group together share some common underlying factor
The procedure is based on inter-item correlations
Correlation: a measure of similarity between two variables; varies between -1 and +1
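Inter-item correlations are the raw material of factor analysis. A minimal sketch with hypothetical data (6 respondents answering 3 Likert-type items, not taken from the original study):

```python
import numpy as np

# Hypothetical responses: 6 respondents x 3 Likert items (1-5).
items = np.array([[5, 4, 5],
                  [4, 4, 4],
                  [2, 1, 2],
                  [1, 2, 1],
                  [3, 3, 4],
                  [5, 5, 5]])

# 3 x 3 inter-item correlation matrix; large off-diagonal values
# suggest the items group together on a common factor.
corr = np.corrcoef(items, rowvar=False)
print(corr.round(2))
```

Here all three items correlate strongly, so a factor analysis would group them onto one factor.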

Stages in Factor Analysis
Extraction: how the computer searches for patterns
Rotation: mathematical manipulation of the patterns; determines whether the computer produces correlated or uncorrelated factors
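The extraction stage can be sketched as an eigendecomposition of the inter-item correlation matrix (a principal-components-style extraction; rotation is omitted). The data below are simulated under the assumption of one underlying factor driving three items:

```python
import numpy as np

# Simulated data: one latent factor drives three observed items.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                       # the common factor
items = latent @ np.ones((1, 3)) + 0.5 * rng.normal(size=(100, 3))

corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)                  # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]       # largest first

# Unrotated loadings: eigenvector scaled by the square root of its eigenvalue.
loadings = eigvecs[:, :1] * np.sqrt(eigvals[:1])
print(eigvals.round(2), loadings.ravel().round(2))
```

One large eigenvalue with high loadings on all three items is the pattern the "sorting" described above detects.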

Concept Measurement Example
Research on the effects of TV news coverage of social protest
Subjects were shown one of three TV news stories about an anarchist protest:
1. Extremely critical
2. Highly critical
3. Moderately critical
Subjects then responded to a questionnaire, and differences between exposure groups were examined

Example of Factor Analysis
Started with 28 items measuring attitudes
Factor analysis reduces these to a smaller set of underlying factors…

[Three slides show factor analysis output with poorly fitting items marked "Remove"]

Five Factors
1. Protest rights
2. Police hostility
3. Protest utility
4. Blame the protesters
5. Anti-violence

1. Support for Protest Rights
A. Protesters have a right to protest
B. Protesters should not be allowed to protest in public places (reverse coded)
C. Protesters have a right to be heard

2. Hostility toward the Police
A. Police were out of line
B. Police used excessive force
C. Police were violent

3. Utility of Protest
A. Protesters offer new insights
B. It's important to listen to protesters
C. Protesters brought issues to my attention

4. Blame the Protesters
A. Protesters initiated the conflict
B. The protesters were disrespectful
C. The protest was ineffective on politicians

5. Opposition to Protest Violence
A. I feel sorry for the police because of the way they were treated by the protesters
B. The protesters were violent

Combining Items into a Scale
Two approaches: summative scales and factor scores

Summative Scales
Add the items or take their mean, e.g. in SPSS:
compute scale = sum.1(var1, var2, var3).
compute scale = mean.1(var1, var2, var3).
Weights each item equally
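The `.1` suffix in the SPSS functions means the scale is computed only when at least one item is non-missing. A sketch of the same logic (variable names and data are illustrative, not from the original study):

```python
import numpy as np

# SPSS-style mean.n: compute the scale only when at least
# `min_valid` items are non-missing (NaN marks missing).
def mean_scale(rows, min_valid=1):
    rows = np.asarray(rows, dtype=float)
    n_valid = (~np.isnan(rows)).sum(axis=1)
    sums = np.nansum(rows, axis=1)            # missing items contribute 0 to the sum
    means = np.divide(sums, n_valid,
                      out=np.full(rows.shape[0], np.nan),
                      where=n_valid > 0)
    return np.where(n_valid >= min_valid, means, np.nan)

data = [[4, 5, 3],                 # complete case   -> mean 4.0
        [2, float("nan"), 4],      # one missing     -> mean of the rest, 3.0
        [float("nan")] * 3]        # all missing     -> NaN
print(mean_scale(data, min_valid=1))
```

A summative scale like this weights every item equally, which is exactly what factor scores (next) relax.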

Factor Scores
Uses the factor loadings from the factor matrix to weight the items
Gives heavier weighting to items that are more central to the factor
Use the Save command when running the factor analysis (under "Scores," check "Save as variables")
New variables with values for each case are saved in the data file, one for each factor
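The idea behind factor scores can be sketched as weighting standardized items by their loadings, rather than weighting each item equally. The loadings and responses below are hypothetical:

```python
import numpy as np

# Hypothetical loadings for one factor: the first item is most
# central to the factor, so it gets the heaviest weight.
loadings = np.array([0.85, 0.70, 0.60])

items = np.array([[5, 4, 3],
                  [2, 2, 1],
                  [4, 5, 4]], dtype=float)

z = (items - items.mean(axis=0)) / items.std(axis=0)  # standardize each item
scores = z @ loadings       # loading-weighted sum, one score per respondent
print(scores.round(3))
```

SPSS's saved regression-based factor scores involve a bit more (they account for inter-item correlations), but the weighting principle is the same.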

Cronbach's Alpha
Assesses the reliability of a multi-item scale
Based on the average inter-item correlation, weighted by the number of items in the scale
Measures internal consistency (unidimensionality): are all the items measuring the same thing? If so, they should all be highly inter-correlated

Cronbach's Alpha
Formula: alpha = (N * r) / [1 + (N - 1) * r]
where N = the number of items in the scale and r = the average inter-item correlation
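A direct implementation of the standardized-alpha formula, alpha = N*r / [1 + (N - 1)*r], showing how both more items and a higher average correlation raise alpha:

```python
# Standardized Cronbach's alpha from the number of items and the
# average inter-item correlation.
def cronbach_alpha(n_items, avg_r):
    return (n_items * avg_r) / (1 + (n_items - 1) * avg_r)

# Same average correlation, more items -> higher alpha.
print(cronbach_alpha(3, 0.5))    # 0.75
print(cronbach_alpha(10, 0.5))
```

This is why lengthening a scale with comparable items is a standard way to improve reliability.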

Acceptable Alpha for a Scale
Ideally, alpha > .80; some journals accept > .70
A low alpha means either:
1. The scale is not reliable (the items contain a lot of error)
2. The items may be measuring two different things
"Alpha if item deleted" can help identify a bad item
More than one bad item can indicate that some items measure a different concept
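"Alpha if item deleted" just means recomputing alpha with each item dropped in turn; a large jump flags a bad item. A sketch with hypothetical data, where items 1-3 hang together and item 4 is unrelated noise:

```python
import numpy as np

# Cronbach's alpha from an (n_respondents, n_items) array,
# using the variance form of the formula.
def alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Alpha recomputed with each item deleted in turn.
def alpha_if_deleted(items):
    items = np.asarray(items, dtype=float)
    return [alpha(np.delete(items, j, axis=1)) for j in range(items.shape[1])]

data = [[5, 4, 5, 1],     # hypothetical responses: columns 1-3 are
        [4, 4, 4, 5],     # highly inter-correlated, column 4 is noise
        [2, 1, 2, 4],
        [1, 2, 1, 2],
        [3, 3, 4, 1],
        [5, 5, 5, 3]]
print(round(alpha(data), 2), [round(a, 2) for a in alpha_if_deleted(data)])
```

Deleting the noise item raises alpha sharply, while deleting any of the good items lowers it, which is exactly the pattern to look for in SPSS's "Alpha if item deleted" column.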