Estimation Kline Chapter 7 (skip 160-176, appendices)

Estimation
- Estimation = the math that goes on behind the scenes to give you the parameter numbers
- Common types:
  - Maximum Likelihood (ML)
  - Asymptotically Distribution Free (ADF)
  - Unweighted Least Squares (ULS)
  - Two-Stage Least Squares (TSLS)

Max Like
- ML estimates are the parameter values that maximize the likelihood that the data were drawn from the population
- Seems very abstract, no? (see the sketch below)
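A minimal sketch of the ML idea for a single variable, written in Python with invented data rather than anything from Kline or AMOS: we search for the mean and SD that make the observed scores most probable, and the answer lands on the sample statistics.

```python
# Illustrative only: maximum likelihood for the mean and SD of one variable.
# The data and start values are made up for this sketch.
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(42)
data = rng.normal(loc=50, scale=10, size=200)   # invented scores

def neg_log_likelihood(params):
    mu, sigma = params
    if sigma <= 0:                               # keep the SD in a legal range
        return np.inf
    return -np.sum(stats.norm.logpdf(data, loc=mu, scale=sigma))

start_values = [40.0, 5.0]                       # rough start values
result = optimize.minimize(neg_log_likelihood, start_values,
                           method="Nelder-Mead")

print("ML estimates (mean, SD):", result.x)
print("Sample mean and SD:     ", data.mean(), data.std())
```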

Max Like
- Normal theory method: multivariate normality is assumed in order to use ML
- Therefore it's important to check your normality assumption; other types of estimation may work better for non-normal DVs (endogenous variables)

Max Like
- Full information method: all estimates are calculated at the same time
- Partial information methods calculate some of the estimates first, then use those to calculate the rest

Max Like
- Fit function: the relationship between the sample covariances and the estimated (model-implied) covariances
- We want our fit function to be:
  - High if we are measuring how much they match (goodness of fit)
  - Low if we are measuring how much they mismatch (residuals)
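For reference, the ML fit function is a mismatch measure (so smaller is better) that compares the sample covariance matrix S with the model-implied covariance matrix for the p observed variables; this is the standard textbook form rather than anything specific to these slides, and it equals zero when the two matrices match exactly:

```latex
F_{ML} = \ln\lvert\Sigma(\hat{\theta})\rvert - \ln\lvert S\rvert
         + \operatorname{tr}\!\left[S\,\Sigma(\hat{\theta})^{-1}\right] - p
```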

Max Like
- ML is an iterative process: the computer calculates a possible start solution, then re-estimates repeatedly until the fit is as good as it can get (the solution converges)
- Start values: usually generated by the computer, but you can enter your own values if you are having problems converging to a solution
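A minimal sketch of that iterate-from-start-values idea, assuming a one-factor model with three indicators and an invented covariance matrix; this is illustrative Python, not what AMOS actually does internally.

```python
# Illustrative only: minimize the ML fit function for a one-factor,
# three-indicator model. The covariance matrix and start values are invented.
import numpy as np
from scipy import optimize

S = np.array([[1.00, 0.48, 0.42],
              [0.48, 1.00, 0.56],
              [0.42, 0.56, 1.00]])               # sample covariance matrix
p = S.shape[0]

def implied_cov(theta):
    loadings, error_var = theta[:3], theta[3:]
    return np.outer(loadings, loadings) + np.diag(error_var)

def f_ml(theta):
    sigma = implied_cov(theta)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:                                # keep Sigma positive definite
        return np.inf
    return (logdet - np.linalg.slogdet(S)[1]
            + np.trace(S @ np.linalg.inv(sigma)) - p)

start = np.full(6, 0.5)                          # user-supplied start values
fit = optimize.minimize(f_ml, start, method="Nelder-Mead",
                        options={"maxiter": 5000})

print("Iterations:", fit.nit)
print("Loadings:", fit.x[:3].round(3))
print("Error variances:", fit.x[3:].round(3))
print("Minimum of F_ML:", round(fit.fun, 6))
```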

Max Like
- Inadmissible solutions: you get numbers in your output, but the parameters are clearly not correct
- You will get a warning on the Notes for Model page

Max Like
- Heywood cases:
  - Parameter estimates are illogical (huge)
  - Negative variance estimates (only variances; covariances can legitimately be negative)
  - Correlation estimates over 1 (SMCs)
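As a quick illustration with hypothetical numbers (not real AMOS output), these are the kinds of sanity checks that flag a Heywood case:

```python
# Illustrative only: flag negative variance estimates and SMCs greater than 1.
variance_estimates = {"e1": 0.42, "e2": -0.07, "e3": 0.55}   # error variances
smcs = {"item1": 0.58, "item2": 1.12, "item3": 0.45}         # squared multiple correlations

for name, est in variance_estimates.items():
    if est < 0:
        print(f"Heywood case: variance of {name} is negative ({est})")

for name, smc in smcs.items():
    if smc > 1:
        print(f"Heywood case: SMC for {name} exceeds 1 ({smc})")
```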

Max Like
- What's happening?
  - Specification error
  - Nonidentification
  - Outliers
  - Small samples
  - Only two indicators per latent variable (more is always better)
  - Bad start values (especially for errors)
  - Very low or high correlations (empirical underidentification)

Max Like
- Scale free/invariant: if you change the scale of a variable with a linear transform, the model is still the same
- Assumes unstandardized starting variables; otherwise you'd be standardizing already-standardized estimates, which is weird

Max Like
- Interpretation of estimates:
  - Loadings/path coefficients are interpreted just like regression coefficients (remember you can click the estimate to get help!)
  - Error variances tell you how much variance is not accounted for by the model (so you want them to be small)
  - The reverse is the SMCs, which tell you how much variance is accounted for by the model
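For a standardized indicator that loads on a single factor with an uncorrelated error term, the two quantities are mirror images of each other (a standard result, not spelled out on the slide), where lambda*_i is the standardized loading:

```latex
\mathrm{SMC}_i = \lambda_i^{*2}, \qquad
\widehat{\mathrm{Var}}(e_i)_{\mathrm{std}} = 1 - \lambda_i^{*2}
```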

Other Methods
- For continuous variables with normal distributions:
  - Generalized Least Squares (GLS)
  - Unweighted Least Squares (ULS)
  - Fully Weighted Least Squares (WLS)

Other Methods
- ULS
  - Pros: does not require positive definite matrices; robust initial estimates
  - Cons: not scale free; not as efficient; all variables must be on the same scale

Other Methods
- GLS
  - Pros: scale free; faster computation time
  - Cons: not commonly used; if GLS runs, so does ML
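For comparison with the ML fit function given earlier, the least squares fit functions differ mainly in how they weight the residual matrix S - Sigma(theta-hat): ULS not at all, GLS by the inverse of the sample covariance matrix (again, these are the standard textbook forms):

```latex
F_{ULS} = \tfrac{1}{2}\operatorname{tr}\!\left[\left(S - \Sigma(\hat{\theta})\right)^{2}\right],
\qquad
F_{GLS} = \tfrac{1}{2}\operatorname{tr}\!\left[\left(\left(S - \Sigma(\hat{\theta})\right)S^{-1}\right)^{2}\right]
```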

Other Methods
- Nonnormal data:
  - In ML, the parameter estimates themselves may still be accurate, but the standard errors tend to be biased, usually too small (eek), so significance tests can be misleading
  - The model chi-square (fit statistic) tends to be overestimated

Other Methods
- Corrected normal method: uses ML but then adjusts the SEs for nonnormality (robust SEs)
- Satorra-Bentler statistic:
  - Adjusts the chi-square value from standard ML by the degree of kurtosis/skew
  - Gives a corrected model test statistic
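In its general form (the generic formula, not a detail from these slides), the ML chi-square is divided by a scaling factor c that reflects the amount of multivariate kurtosis; the degrees of freedom are unchanged, and with positively kurtotic data c > 1, so the corrected statistic is smaller:

```latex
\chi^{2}_{SB} = \frac{\chi^{2}_{ML}}{c}
```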

Other Methods
- Bootstrapping! We will cover this section later.

Other Methods
- Asymptotically Distribution Free (ADF) estimation (in the book he calls it "arbitrary")
  - Estimates the skew/kurtosis in the data to generate a model
  - May not converge because of the number of parameters to estimate
  - I've always found this not to be helpful.
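The ADF (also called WLS) fit function, in its standard textbook form, weights the vector of residual variances and covariances s - sigma(theta-hat) by the inverse of a weight matrix W estimated from fourth-order moments of the data; with p observed variables there are p(p+1)/2 variances and covariances, so W becomes enormous quickly, which is why ADF needs very large samples and often fails to converge:

```latex
F_{ADF} = \left(s - \sigma(\hat{\theta})\right)^{\top} W^{-1}\left(s - \sigma(\hat{\theta})\right)
```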

Other Methods
- Non-continuous data:
  - You can estimate some models with non-continuous data, but you are better off switching to Mplus, which has robust (and automatic!) estimators for categorical data.
  - (So blah on those pages, as you can't really do this in Amos easily.)

Analysis Properties
- Click on the "abacus with buttons" button to get started

Estimation
- You can pick the type of estimation on the left.
- You can pick "estimate means and intercepts" on the right (you must select this for multigroup models and models with missing data).
- Look! You can turn off the output for the independence and saturated models.

Output
- Here you want to select (pretty much always):
  - Standardized estimates
  - Squared multiple correlations
  - Modification indices (won't run with "estimate means and intercepts" on)
- The rest of the options we'll talk about as we go.

Entering Correlation Matrices
- If you have means, the last row is labeled "mean".
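The algebra that links a correlation matrix plus standard deviations to the covariance matrix that actually gets analyzed is just Cov = D R D, with D a diagonal matrix of SDs; here is a sketch with invented numbers, in Python purely to show the arithmetic.

```python
# Illustrative only: rebuild a covariance matrix from a correlation matrix,
# standard deviations, and (optionally) a row of means. Numbers are invented.
import numpy as np

R = np.array([[1.0, 0.3, 0.5],
              [0.3, 1.0, 0.4],
              [0.5, 0.4, 1.0]])        # correlation matrix
sds = np.array([2.0, 1.5, 3.0])        # standard deviations
means = np.array([10.0, 8.0, 12.0])    # the "mean" row, if means are included

D = np.diag(sds)
cov = D @ R @ D                        # Cov = D R D

print("Covariance matrix:\n", cov)
print("Means:", means)
```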

Teacher Example

Mother Example

Exercise Example: Class Assignment