Structural Equation Modeling Continued: Lecture 2 Psy 524 Ainsworth

Covariance Algebra
- Underlying parameters in SEM: regression coefficients, variances, and covariances
- A hypothesized model is used to estimate these parameters for the population (assuming the model is true)
- The parameter estimates are used to create a hypothesized variance/covariance (VC) matrix
- The sample VC matrix is compared to the estimated VC matrix

Covariance Algebra
- Three basic rules of covariance algebra:
  1. COV(c, X₁) = 0
  2. COV(cX₁, X₂) = c · COV(X₁, X₂)
  3. COV(X₁ + X₂, X₃) = COV(X₁, X₃) + COV(X₂, X₃)
- Reminder: regular old-fashioned regression: Y = bX + a
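These three rules can be verified numerically. Below is a minimal sketch (assuming NumPy; the values and variable names are arbitrary and purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2, x3 = rng.normal(size=(3, 100_000))
c = 2.5

def cov(a, b):
    # sample covariance between two vectors
    return np.cov(a, b)[0, 1]

# Rule 1: a constant has zero covariance with any variable
print(cov(np.full_like(x1, c), x1))                              # 0.0
# Rule 2: COV(c*X1, X2) = c * COV(X1, X2)
print(np.isclose(cov(c * x1, x2), c * cov(x1, x2)))              # True
# Rule 3: COV(X1 + X2, X3) = COV(X1, X3) + COV(X2, X3)
print(np.isclose(cov(x1 + x2, x3), cov(x1, x3) + cov(x2, x3)))   # True
```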

Example Model (path diagram: X₁ → Y₁, X₁ → Y₂, and Y₁ → Y₂, with errors ε₁ and ε₂ on the endogenous variables)

Covariance Algebra
- In covariance structure modeling the intercept is not used
- Given the example model, the equations are:
  - Y₁ = γ₁₁X₁ + ε₁
    - Here Y₁ is predicted by X₁ and ε₁ only
    - X₁ is exogenous, so γ is used to indicate the weight
  - Y₂ = β₂₁Y₁ + γ₂₁X₁ + ε₂
    - Y₂ is predicted by X₁ and Y₁ (plus error)
    - Y₁ is endogenous, so its weight is indicated by β
- The two different weights indicate the type of relationship: γ for an IV predicting a DV, β for a DV predicting a DV

Covariance Algebra
- Estimated covariance given the model: COV(X₁, Y₁), the simple path
  - Since Y₁ = γ₁₁X₁ + ε₁, we can substitute: COV(X₁, Y₁) = COV(X₁, γ₁₁X₁ + ε₁)
  - Using rule 3 we distribute: COV(X₁, γ₁₁X₁ + ε₁) = COV(X₁, γ₁₁X₁) + COV(X₁, ε₁)
  - By the definition of regression, COV(X₁, ε₁) = 0
  - So this simplifies to: COV(X₁, Y₁) = COV(X₁, γ₁₁X₁)
  - Using rule 2 we can pull γ₁₁ out: COV(X₁, Y₁) = γ₁₁COV(X₁, X₁)
  - COV(X₁, X₁) is the variance of X₁, so COV(X₁, Y₁) = γ₁₁σ²X₁ (a numerical check follows below)
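The simple-path result is easy to confirm by simulation. A minimal sketch (the coefficient and variance values are made up for illustration, not taken from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
gamma11 = 0.7

x1 = rng.normal(0, 2.0, n)           # Var(X1) = 4
e1 = rng.normal(0, 1.0, n)           # error, uncorrelated with X1
y1 = gamma11 * x1 + e1               # Y1 = gamma11*X1 + e1 (no intercept)

print(np.cov(x1, y1)[0, 1])          # ~ 2.8
print(gamma11 * np.var(x1, ddof=1))  # gamma11 * Var(X1): same quantity
```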

Covariance Algebra
- Estimated covariance given the model: COV(Y₁, Y₂), the complex path
  - Substituting the equations for Y₁ and Y₂: COV(Y₁, Y₂) = COV(γ₁₁X₁ + ε₁, β₂₁Y₁ + γ₂₁X₁ + ε₂)
  - Distributing all of the pieces: COV(Y₁, Y₂) = COV(γ₁₁X₁, β₂₁Y₁) + COV(γ₁₁X₁, γ₂₁X₁) + COV(γ₁₁X₁, ε₂) + COV(ε₁, β₂₁Y₁) + COV(ε₁, γ₂₁X₁) + COV(ε₁, ε₂)
  - Nothing should correlate with the errors, so those terms all drop out: COV(Y₁, Y₂) = COV(γ₁₁X₁, β₂₁Y₁) + COV(γ₁₁X₁, γ₂₁X₁)
  - Rearranging (rule 2): COV(Y₁, Y₂) = γ₁₁β₂₁COV(X₁, Y₁) + γ₁₁γ₂₁COV(X₁, X₁) = γ₁₁β₂₁σX₁Y₁ + γ₁₁γ₂₁σ²X₁
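The same expansion can be reproduced symbolically, step for step. A sketch using SymPy that encodes the covariance rules above together with the slide's assumption that error terms covary with nothing:

```python
import sympy as sp

g11, g21, b21, s_x1x1, s_x1y1 = sp.symbols(
    'gamma11 gamma21 beta21 sigma_x1x1 sigma_x1y1')

# each variable is a linear combination {basic term: coefficient}
Y1 = {'X1': g11, 'e1': 1}
Y2 = {'Y1': b21, 'X1': g21, 'e2': 1}

# known covariances among basic terms; anything involving an error is 0
known = {('X1', 'X1'): s_x1x1, ('X1', 'Y1'): s_x1y1, ('Y1', 'X1'): s_x1y1}

def cov(u, v):
    # bilinear expansion (rules 2 and 3): COV(sum aU, sum bV) = sum a*b*COV(U, V)
    return sp.expand(sum(cu * cv * known.get((tu, tv), 0)
                         for (tu, cu) in u.items() for (tv, cv) in v.items()))

print(cov(Y1, Y2))  # beta21*gamma11*sigma_x1y1 + gamma11*gamma21*sigma_x1x1
```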

Back to the Path Model
- COV(X₁, Y₁) = γ₁₁σ²X₁
- COV(Y₁, Y₂) = γ₁₁β₂₁σX₁Y₁ + γ₁₁γ₂₁σ²X₁

Example

SEM models
- All of the relationships in the model are translatable into equations
- Analysis in SEM is preceded by specifying a model, as in the previous slide
- This model is used to create an estimated covariance matrix

SEM models
- The goal is to specify a model whose estimated covariance matrix is not significantly different from the sample covariance matrix
- CFA differs from EFA in that this difference can be tested with a chi-square test (if ML methods are used in EFA, a chi-square test can be estimated there as well)

Model Specification
- Bentler-Weeks model matrices:
  - β (beta): matrix of regression coefficients for DVs predicting other DVs
  - γ (gamma): matrix of regression coefficients for DVs predicted by IVs
  - Φ (phi): matrix of covariances among the IVs
  - η (eta): vector of DVs
  - ξ (xi): vector of IVs
- Bentler-Weeks regression model: η = βη + γξ

Model Specification
- In the Bentler-Weeks model, only independent variables have covariances (the Φ matrix)
  - "Independent" here includes everything with only single-headed arrows pointing away from it (e.g., E's, F's, V's, D's)
- The estimated parameters are in the β, γ, and Φ matrices (a matrix sketch follows below)
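For the two-equation example, these matrices can be written out directly. A minimal NumPy sketch (the variable ordering and all numeric values are illustrative assumptions, not taken from the lecture):

```python
import numpy as np

gamma11, gamma21, beta21 = 0.7, 0.4, 0.5       # illustrative coefficients

# DVs: eta = [Y1, Y2];  IVs: xi = [X1, E1, E2]
B = np.array([[0.0,    0.0],                   # Y1 is predicted by no DV
              [beta21, 0.0]])                  # Y2 <- Y1
Gamma = np.array([[gamma11, 1.0, 0.0],         # Y1 <- X1 + E1
                  [gamma21, 0.0, 1.0]])        # Y2 <- X1 + E2
Phi = np.diag([4.0, 1.0, 1.0])                 # Var(X1), Var(E1), Var(E2); no IV covariances

# Bentler-Weeks regression model: eta = B @ eta + Gamma @ xi
```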

Model Specification: Bentler-Weeks model in matrix form

Diagram again

Model Specification
- Φ (phi) matrix: it is here that other covariances can be specified

Model Estimation
- Model estimation in SEM requires start values
- There are methods for generating good start values ("good" meaning fewer iterations are needed to estimate the model)
- Just rely on EQS or other programs to generate them for you (they do a pretty good job)

Model Estimation
- These start values are used in the first round of model estimation by inserting them into the Bentler-Weeks model matrices (indicated by a "hat") in place of the *s
- Using EQS, we get start values for the example

Model Estimation

Selection Matrices (G)
- These are used to pull individual variables out of the η and ξ vectors for use in the matrix equations that estimate the covariances
- So that Y = G_y · η, where Y contains only the measured dependent variables

Model Estimation
- G_x = [ ] so that X = G_x · ξ, where X is (are) the measured variable(s)
- Rewriting the matrix equation to solve for η: η = (I − β)⁻¹γξ
- This expresses the DVs as linear combinations of the IVs (see the sketch below)
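That rearrangement is one line of linear algebra. A sketch using the illustrative matrices from the earlier code (values are assumptions, not lecture output):

```python
import numpy as np

gamma11, gamma21, beta21 = 0.7, 0.4, 0.5       # illustrative coefficients
B = np.array([[0.0, 0.0], [beta21, 0.0]])
Gamma = np.array([[gamma11, 1.0, 0.0], [gamma21, 0.0, 1.0]])

# eta = (I - B)^-1 @ Gamma @ xi : DVs as linear combinations of the IVs
A = np.linalg.inv(np.eye(2) - B) @ Gamma
print(A)
# The Y2 row contains gamma21 + beta21*gamma11 in the X1 column:
# the total (direct + indirect) effect of X1 on Y2
```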

Model Estimation
- The estimated population covariance matrix for the DVs is found by: Σ̂_YY = G_y(I − β)⁻¹γΦγ′[(I − β)⁻¹]′G_y′

Model Estimation
- The estimated covariance matrix between DVs and IVs is found by: Σ̂_YX = G_y(I − β)⁻¹γΦG_x′

Model Estimation
- The covariance(s) between IVs is estimated by: Σ̂_XX = G_xΦG_x′
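Putting the three blocks together gives the full model-implied covariance matrix. A sketch continuing the illustrative example (G_y selects both DVs, G_x selects the one measured IV; all numeric values remain assumptions):

```python
import numpy as np

gamma11, gamma21, beta21 = 0.7, 0.4, 0.5
B     = np.array([[0.0, 0.0], [beta21, 0.0]])
Gamma = np.array([[gamma11, 1.0, 0.0], [gamma21, 0.0, 1.0]])
Phi   = np.diag([4.0, 1.0, 1.0])               # Var(X1), Var(E1), Var(E2)

Gy = np.eye(2)                                 # both DVs are measured
Gx = np.array([[1.0, 0.0, 0.0]])               # X1 is the only measured IV

A = np.linalg.inv(np.eye(2) - B) @ Gamma       # (I - beta)^-1 gamma
Syy = Gy @ A @ Phi @ A.T @ Gy.T                # DV-by-DV block
Syx = Gy @ A @ Phi @ Gx.T                      # DV-by-IV block
Sxx = Gx @ Phi @ Gx.T                          # IV-by-IV block
print(Syy, Syx, Sxx, sep="\n\n")
```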

Model Estimation
- Programs like EQS usually estimate all parameters simultaneously, giving an estimated covariance matrix Σ̂ (sigma-hat)
- S is the sample covariance matrix
- The residual matrix is found by subtracting Σ̂ from S
- The whole process is iterative: after each iteration, the values in the hat matrices are fed back in as new start values

Model Estimation
- Iterations continue until the estimation function (usually ML) is minimized; this is called convergence
- After five iterations, the residual matrix in the example is: (matrix shown on the slide)

Model Estimation  When the model converges you get estimates of all of the parameters and a converged residual matrix

Model Estimation

Model Evaluation
- χ² is based on the function minimum (from ML estimation) when the model converges
- The minimum value is multiplied by N − 1; from EQS: (0.08924)(99) = 8.835
- The degrees of freedom are calculated as the difference between the number of data points, p(p + 1)/2, and the number of parameters estimated
- In the example, 5(6)/2 = 15 data points and 11 estimates leave 4 degrees of freedom
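This arithmetic is easy to reproduce (a short sketch; the function minimum 0.08924 and N = 100 are the EQS values quoted above):

```python
n, f_min = 100, 0.08924                       # sample size and ML function minimum
chi_sq = (n - 1) * f_min                      # (N - 1) * F_min
p = 5                                         # number of measured variables
n_params = 11                                 # free parameters estimated
df = p * (p + 1) // 2 - n_params              # 15 - 11 = 4
print(round(chi_sq, 3), df)                   # 8.835 4
```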

Model Evaluation
- χ²(4) = 8.835, p = .065
- The goal in SEM is to get a non-significant chi-square, because that means there is little difference between the hypothesized and sample covariance matrices
- Even with a non-significant model you need to test the significance of the predictors
- Each parameter is divided by its SE to get a Z-score, which can be evaluated
- SE values are best left to EQS to estimate (a sketch of both tests follows below)
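Both tests can be checked with SciPy. A sketch (the parameter estimate and SE below are hypothetical values, used only to show the calculation):

```python
from scipy import stats

# model chi-square: p > .05 means the model is NOT rejected (good fit)
p_model = stats.chi2.sf(8.835, df=4)
print(round(p_model, 3))                      # ~0.065

# significance of a single parameter: z = estimate / SE (values hypothetical)
est, se = 0.7, 0.21
z = est / se
p_z = 2 * stats.norm.sf(abs(z))               # two-tailed p-value
print(round(z, 2), round(p_z, 4))
```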

Final Model
- Final example model with unstandardized (in parentheses) and standardized values