ALISON BOWLING CONFIRMATORY FACTOR ANALYSIS

REVIEW OF EFA
Exploratory Factor Analysis (EFA) explores the data.
All measured variables are related to every factor by a factor loading estimate.
If each measured variable loads highly (> .4) on only one factor, we have simple structure.
Factors are derived from statistical results, not from theory, and are named only after the analysis.
We do not know initially how many factors there are or which variables belong to which constructs.

CFA AND CONSTRUCT VALIDITY
Construct validity is the extent to which a set of measured items actually reflects the theoretical latent construct the items are designed to measure.
CFA enables us to assess the construct validity of a proposed measurement theory.

INTRODUCTION TO CFA
With CFA, the researcher must specify, before the results can be computed, both the number of factors that exist within a set of variables and which factor each variable will load highly on.
In the typical CFA model, each variable loads on only one factor, and the factors are allowed to correlate.

TERMINOLOGY
Latent variable: an unobserved variable (factor); displayed as a circle in the model diagram.
Observed variables: variables in the data set; displayed as rectangles.
Exogenous variables: synonymous with IVs; do not have arrows pointing to them.
Endogenous variables: synonymous with DVs; have arrows pointing to them and have error variances.

PATH DIAGRAM
Mediation model: er1 and er2 are latent (not measured); emotcope, coghard and ghq are observed variables.
Emotcope and ghq are endogenous: they have arrows pointing to them and have error variances.
The lines represent predicted relationships.

INFORMATION FOR CFA

TESTS OF GOODNESS OF FIT
The reproduced covariance matrix is constructed after estimation of the parameters and may be compared with the input matrix. Tests of goodness of fit compare these two matrices.
Likelihood ratio χ²: very sensitive to sample size, so not terribly useful on its own.
Goodness-of-fit index (GFI): the higher the better.
Root mean square error of approximation (RMSEA): lower is better.
Incremental fit indices (TLI, CFI, etc.): values > .9 indicate acceptable fit.
AIC: can be used to compare models.
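The fit tests above all summarise the discrepancy between the reproduced (model-implied) covariance matrix and the input (sample) covariance matrix. A minimal numpy sketch of that comparison follows; the loadings, factor correlation and sample matrix are invented for illustration and are not taken from any example in these slides.

```python
import numpy as np

# Hypothetical standardised loadings: 4 indicators, 2 correlated factors
Lambda = np.array([[0.8, 0.0],
                   [0.7, 0.0],
                   [0.0, 0.9],
                   [0.0, 0.6]])
Phi = np.array([[1.0, 0.5],
                [0.5, 1.0]])                     # factor correlation matrix
# Residual variances chosen so the implied variances equal 1
Theta = np.diag(1 - np.sum((Lambda @ Phi) * Lambda, axis=1))

# Reproduced (model-implied) covariance matrix: Sigma = Lambda Phi Lambda' + Theta
Sigma_implied = Lambda @ Phi @ Lambda.T + Theta

# Hypothetical input (sample) correlation matrix
S = np.array([[1.00, 0.58, 0.33, 0.24],
              [0.58, 1.00, 0.30, 0.22],
              [0.33, 0.30, 1.00, 0.55],
              [0.24, 0.22, 0.55, 1.00]])

# Goodness-of-fit statistics summarise how small these residuals are
print(np.round(S - Sigma_implied, 3))
```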

SIGNIFICANCE OF PARAMETERS
Each of the regression weights and other parameters estimated has a critical ratio (CR) test of significance: the estimate divided by its standard error. The CR is distributed as z, so |CR| > 1.96 is significant at the .05 level (two-tailed).
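As a quick illustration, the two-tailed p-value for a critical ratio can be obtained from the standard normal distribution. The estimate and standard error below are hypothetical, not values from this example.

```python
from scipy.stats import norm

estimate, se = 0.42, 0.15              # hypothetical unstandardised weight and its SE
cr = estimate / se                     # critical ratio, distributed as z
p = 2 * norm.sf(abs(cr))               # two-tailed p-value
print(f"CR = {cr:.2f}, p = {p:.4f}")   # |CR| > 1.96 corresponds to p < .05
```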

MODEL BUILDING
Error terms: all endogenous variables have error variances associated with them. These represent errors of measurement (for observed variables) or errors of prediction (for latent variables).
Fixing parameters: to avoid having more parameters than data points (an under-identified model), some of the parameters need to be fixed. The regression weights of the error terms are fixed to 1. Because factors are unobserved they have no scale, so the loading of one indicator variable for each factor is usually fixed to 1.

EXAMPLE (TABACHNICK AND FIDELL)
CFA of the WISC: 11 subtests, with two factors.
Verbal: information, comprehension, arithmetic, similarities, vocabulary, digit span.
Performance: picture completion, picture arrangement, block design, object assembly, coding.
Does a two-factor model with simple structure fit the data? Is there a significant covariance between the Verbal and Performance factors?
Datafile: wiscsem.sav
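The slides run this analysis in AMOS, but the same two-factor measurement model can be sketched in Python with the semopy package, using lavaan-style syntax. The variable names and the CSV file name below are assumptions (the actual column names live in wiscsem.sav), so treat this as a template rather than a drop-in script.

```python
import pandas as pd
import semopy

# Two-factor CFA: each subtest loads on one factor; the factors covary
model_desc = """
Verbal =~ info + comp + arith + simil + vocab + digit
Performance =~ pictcomp + parang + block + object + coding
Verbal ~~ Performance
"""

data = pd.read_csv("wiscsem.csv")        # assumed export of wiscsem.sav
model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())                   # loadings, factor covariance, error variances
print(semopy.calc_stats(model))          # chi-square, df, RMSEA, CFI, TLI, AIC, ...
```

Like AMOS, semopy has to give each latent factor a scale; by default this is done by fixing one indicator loading per factor, which you can verify in the inspect() output.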

MODEL SPECIFICATION
Data points = (11 x 12) / 2 = 66
Parameters to estimate = 1 covariance + 11 regression weights + 11 variances = 23
df = 66 - 23 = 43
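A small helper (illustrative only) that reproduces this counting rule: for p observed variables there are p(p + 1)/2 distinct sample moments, and the degrees of freedom are that count minus the number of free parameters.

```python
def cfa_degrees_of_freedom(n_observed: int, n_free_parameters: int) -> int:
    """Degrees of freedom = distinct sample moments minus free parameters."""
    sample_moments = n_observed * (n_observed + 1) // 2   # variances + covariances
    return sample_moments - n_free_parameters

# WISC example: 11 subtests; 1 covariance + 11 regression weights + 11 variances = 23
print(cfa_degrees_of_freedom(11, 23))   # 66 - 23 = 43
```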

ANALYSIS PROPERTIES

OUTPUT: STANDARDISED ESTIMATES
The model shows a correlation between the Verbal and Performance factors of .59, together with the standardised regression coefficients (loadings) of the variables on the two factors.

NOTES FOR MODEL
Computation of degrees of freedom (Default model):
Number of distinct sample moments: 66
Number of distinct parameters to be estimated: 23
Degrees of freedom (66 - 23): 43
Result (Default model): minimum was achieved. Chi-square with 43 degrees of freedom was significant, probability level = .005.

MODEL FIT
Baseline comparison indices (NFI, RFI, IFI, TLI, CFI) are tabulated for the default, saturated and independence models.
RMSEA, with its 90% confidence interval (LO 90, HI 90) and PCLOSE, is tabulated for the default and independence models.
In general the model fits well. Can it be improved?
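For reference, the incremental indices and RMSEA that AMOS tabulates can be computed from the chi-square values and degrees of freedom of the default and independence (baseline) models, together with the sample size. The numbers in the example call are made up, since the slide's actual values did not survive the transcript.

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """CFI, TLI and RMSEA from the default (m) and independence/baseline (b) models.
    Standard formulas; some packages use n rather than n - 1 in the RMSEA denominator."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, 0.0)
    cfi = 1.0 - d_m / max(d_b, d_m, 1e-12)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)
    rmsea = math.sqrt(d_m / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Made-up illustration only: model chi2 = 70 (df = 43), baseline chi2 = 500 (df = 55), n = 200
print(fit_indices(70.0, 43, 500.0, 55, 200))
```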

IMPROVING MODEL FIT
1. Could there be an additional path in the model?
2. Could a more parsimonious model be obtained by removing coding from the model?

ADDING A PATH
Check the modification indices. The table lists M.I. and Par Change values for covariances among several error terms (er2, er3, er5, er11) and between er2 and the Performance factor.
Add a path in which Performance predicts comprehension (the variable associated with er2).

UPDATED MODEL

MODEL FIT
Number of distinct sample moments: 55
Number of distinct parameters to be estimated: 22
Degrees of freedom (55 - 22): 33
Chi-square (df = 33) is now non-significant, probability level = .079.
Other indices have also improved: RMSEA has dropped from .06 to .046.

MODEL COMPARISON
We can compare nested models (the initial model versus the model including the Performance -> comprehension path) by testing the difference between their chi-square values.
Initial model: df = 43. Model with the extra path included: df = 42.
Chi-square difference = 9.94, df = 1, p < .01.
Adding the extra path significantly improves model fit.
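The p-value for the difference test comes from a chi-square distribution whose degrees of freedom equal the difference in df between the two nested models (here 43 - 42 = 1):

```python
from scipy.stats import chi2

chi2_diff, df_diff = 9.94, 1            # difference in chi-square and in df
p = chi2.sf(chi2_diff, df_diff)         # upper-tail probability
print(f"p = {p:.4f}")                   # comfortably below .01
```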

MODEL COMPARISON: AIC
We can use AIC to compare non-nested models. When we delete coding, the new model is not nested within the initial model, because the set of observed variables has changed.
Comparing the AIC values of the initial and final models: the lower the value of AIC, the better the model fit.
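AMOS reports AIC as, in essence, the model chi-square plus a penalty of two per free parameter (AIC = chi-square + 2q). The toy comparison below uses invented chi-square values, since the slide's AIC figures are not in the transcript; only the direction of the comparison matters.

```python
def aic(chi_square: float, n_free_parameters: int) -> float:
    """AIC in the form reported by AMOS: chi-square plus 2 per free parameter."""
    return chi_square + 2 * n_free_parameters

# Invented values for illustration only
print(aic(70.0, 23))   # hypothetical initial model
print(aic(45.0, 22))   # hypothetical final model; the lower AIC indicates better fit
```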

CAVEAT
The model modifications are post hoc and may be due to chance. Ideally they should be cross-validated with a new sample.