Estimation Kline Chapter 7 (skip 160-176, appendices)

Estimation
Estimation = the math that goes on behind the scenes to give you the parameter numbers. Common types:
– Maximum Likelihood (ML)
– Asymptotically Distribution Free (ADF)
– Unweighted Least Squares (ULS)
– Two-Stage Least Squares (TSLS)

Max Like
ML estimates are the ones that maximize the likelihood that the observed data were drawn from the population.
– Seems very abstract, no?
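The idea can be made concrete with a toy example outside SEM: picking the mean and SD that make an observed sample most likely. A minimal sketch in Python (the simulated data and the use of scipy are my illustration, not from the chapter):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: 50 draws whose mean and SD we want to recover by ML
rng = np.random.default_rng(42)
data = rng.normal(loc=10.0, scale=2.0, size=50)

def neg_log_likelihood(params):
    mu, log_sigma = params            # optimize log(sigma) so sigma stays positive
    sigma = np.exp(log_sigma)
    # Negative normal log-likelihood of the whole sample under (mu, sigma)
    return 0.5 * np.sum(np.log(2 * np.pi * sigma**2) + ((data - mu) / sigma) ** 2)

result = minimize(neg_log_likelihood, x0=[0.0, 0.0])
mu_hat, sigma_hat = result.x[0], np.exp(result.x[1])
print(mu_hat, sigma_hat)              # close to the sample mean and SD
```

The minimizer lands on the sample mean and the (n-denominator) sample SD, which is exactly what "most likely to have produced the data" means here; SEM does the same thing, only over covariance-structure parameters.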

Max Like
Normal theory method:
– Multivariate normality is assumed in order to use ML, so it is important to check your normality assumption.
– Other types of estimators may work better for non-normal DVs (endogenous variables).
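One quick univariate screen for that assumption, using scipy's D'Agostino-Pearson test on each variable (the data here are simulated for illustration; a full check would also examine multivariate normality, e.g. Mardia's coefficients):

```python
import numpy as np
from scipy import stats

# Hypothetical data matrix: 200 cases x 3 endogenous variables
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 3))
data[:, 2] = np.exp(data[:, 2])       # make one variable clearly skewed

# Univariate screen: test of skew/kurtosis per variable; small p = non-normal
for j in range(data.shape[1]):
    stat, p = stats.normaltest(data[:, j])
    print(f"variable {j}: p = {p:.4f}")
```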

Max Like
Full information method – all the estimates are calculated at the same time.
– Partial information methods calculate part of the estimates, then use those to calculate the rest.

Max Like
Fit function – a measure of the correspondence between the sample covariances and the covariances implied by the parameter estimates. We want the fit function to be:
– High if it measures how much they match (goodness of fit)
– Low if it measures how much they mismatch (residuals)
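The ML fit function can be written down directly. A sketch in NumPy, assuming the standard form F_ML = ln|Σ| − ln|S| + tr(SΣ⁻¹) − p, with made-up 2×2 matrices:

```python
import numpy as np

def f_ml(S, Sigma):
    """ML fit function: approximately 0 when the model-implied covariance
    Sigma reproduces the sample covariance S exactly; grows with mismatch."""
    p = S.shape[0]
    _, logdet_Sigma = np.linalg.slogdet(Sigma)
    _, logdet_S = np.linalg.slogdet(S)
    return logdet_Sigma - logdet_S + np.trace(S @ np.linalg.inv(Sigma)) - p

S = np.array([[1.0, 0.5], [0.5, 1.0]])   # sample covariance (assumed)
print(f_ml(S, S))                        # perfect reproduction -> approximately 0
print(f_ml(S, np.eye(2)))                # ignoring the covariance -> positive misfit
```

Note this version is a *badness*-of-fit measure (a residual-style function): estimation minimizes it, and zero means the model reproduces the data covariances exactly.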

Max Like
ML is an iterative process – the computer calculates a possible start solution, then repeatedly updates the estimates until the fit cannot be improved (convergence).
– Start values are usually generated by the computer, but you can enter your own if the model is having problems converging to a solution.
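A toy illustration of the iteration, assuming a 2×2 model with one free covariance and a hand-supplied start value (the matrices and the choice of optimizer are mine, not from the chapter):

```python
import numpy as np
from scipy.optimize import minimize

S = np.array([[1.0, 0.5], [0.5, 1.0]])     # sample covariance (assumed)

def f_ml(theta):
    # Model-implied covariance: unit variances, one free covariance theta[0]
    Sigma = np.array([[1.0, theta[0]], [theta[0], 1.0]])
    sign, logdet = np.linalg.slogdet(Sigma)
    if sign <= 0:                          # step left the admissible region: reject it
        return np.inf
    return (logdet - np.linalg.slogdet(S)[1]
            + np.trace(S @ np.linalg.inv(Sigma)) - 2)

# Start value supplied explicitly, as you might when a model fails to converge
result = minimize(f_ml, x0=[0.0], method="Nelder-Mead")
print(result.x[0], result.nit)             # estimate near 0.5, after several iterations
```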

Max Like
Inadmissible solutions – you get numbers in your output, but some parameter estimates are clearly not correct.

Max Like
Heywood cases – parameter estimates that are illogical:
– Huge parameter estimates
– Negative variance estimates (only variances; covariances can legitimately be negative)
– Correlation estimates over 1 (e.g., SMCs)
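A simple admissibility screen along these lines (the estimate values below are made up for illustration):

```python
import numpy as np

def inadmissible(variances, correlations):
    """Flag a Heywood case: any negative variance estimate,
    or any correlation estimate outside [-1, 1]."""
    return bool(np.any(np.asarray(variances) < 0)
                or np.any(np.abs(np.asarray(correlations)) > 1))

# Hypothetical estimates from three fitted models
print(inadmissible([0.42, -0.07, 0.63], [0.55, 0.81]))   # True: negative variance
print(inadmissible([0.42, 0.35, 0.63], [0.55, 1.08]))    # True: correlation > 1
print(inadmissible([0.42, 0.35, 0.63], [0.55, 0.81]))    # False: admissible
```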

Max Like
What’s happening? Common causes:
– Specification error
– Nonidentification
– Outliers
– Small samples
– Only two indicators per latent variable (more is always better)
– Bad start values (especially for errors)
– Very low or very high correlations (empirical underidentification)

Max Like
Scale free/invariant – if you change a variable’s scale with a linear transformation, the model is still the same.
– This assumes unstandardized start variables; otherwise you would be standardizing already-standardized estimates, which is weird.
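Scale invariance can be checked numerically: rescaling both the sample and model-implied covariance matrices by the same diagonal matrix (a linear change of units) leaves the ML fit function unchanged. The matrices below are assumed for illustration:

```python
import numpy as np

def f_ml(S, Sigma):
    # ML fit function: ln|Sigma| - ln|S| + tr(S Sigma^-1) - p
    p = S.shape[0]
    return (np.linalg.slogdet(Sigma)[1] - np.linalg.slogdet(S)[1]
            + np.trace(S @ np.linalg.inv(Sigma)) - p)

S = np.array([[1.0, 0.5], [0.5, 1.0]])       # sample covariance (assumed)
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])   # model-implied covariance (assumed)

D = np.diag([10.0, 0.5])                     # rescale each variable, e.g. new units
print(np.isclose(f_ml(S, Sigma), f_ml(D @ S @ D, D @ Sigma @ D)))  # True
```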

Max Like
Interpretation of estimates:
– Loadings/path coefficients – interpreted just like regression coefficients.
– Error variances – tell you how much variance is not accounted for by the model (so you want them to be small).
– The reverse is the SMCs – they tell you how much variance is accounted for by the model.
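For a standardized indicator of a single factor, the arithmetic is simple (the loading value is assumed for illustration):

```python
# Standardized indicator: factor variance 1, standardized loading lam (assumed)
lam = 0.8
smc = lam ** 2                 # proportion of variance the model accounts for
error_variance = 1.0 - smc     # proportion it does not account for
print(round(smc, 2), round(error_variance, 2))   # 0.64 0.36
```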

Other Methods
For continuous variables with normal distributions:
– Generalized Least Squares (GLS)
– Unweighted Least Squares (ULS)
– Fully Weighted Least Squares (WLS)

Other Methods
ULS
– Pros: does not require positive definite matrices; robust initial estimates
– Cons: not scale free; not as efficient; requires all variables to be on the same scale
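The ULS fit function is simple enough to sketch, assuming the standard form F_ULS = ½ tr[(S − Σ)²]; the second call shows why it is not scale free (matrices assumed for illustration):

```python
import numpy as np

def f_uls(S, Sigma):
    """ULS fit function: half the sum of squared residual (co)variances.
    No inverse or determinant, so S need not be positive definite."""
    R = S - Sigma
    return 0.5 * np.trace(R @ R)

S = np.array([[1.0, 0.5], [0.5, 1.0]])       # sample covariance (assumed)
Sigma = np.array([[1.0, 0.4], [0.4, 1.0]])   # model-implied covariance (assumed)
print(f_uls(S, Sigma))                        # two residuals of 0.1 -> about 0.01

# Not scale free: rescaling one variable changes the fit value
D = np.diag([10.0, 1.0])
print(f_uls(D @ S @ D, D @ Sigma @ D))        # about 1.0, not 0.01
```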

Other Methods
GLS
– Pros: scale free; faster computation time
– Cons: not commonly used – in practice, if GLS runs, so does ML.

Other Methods
Nonnormal data:
– In ML, the parameter estimates might still be accurate, but the SEs will be distorted (eek), so significance tests are untrustworthy.
– The model chi-square also tends to be distorted (typically inflated when there is positive kurtosis).

Other Methods
Corrected normal method – uses ML, but then adjusts the SEs for nonnormality (robust SEs).
Satorra-Bentler statistic:
– Adjusts the chi-square value from standard ML by the degree of kurtosis/skew
– A corrected model test statistic
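Numerically, the correction just divides the ML chi-square by a scaling factor c that the software estimates from the data’s multivariate kurtosis (c > 1 under positive kurtosis). With assumed values, not from the chapter:

```python
from scipy.stats import chi2

# Assumed for illustration: ML chi-square 58.4 on 24 df, scaling factor c = 1.35
t_ml, df, c = 58.4, 24, 1.35
t_sb = t_ml / c                       # Satorra-Bentler scaled chi-square
print(round(t_sb, 2))                 # 43.26
print(chi2.sf(t_ml, df) < chi2.sf(t_sb, df))   # True: the corrected p is larger
```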

Other Methods
Asymptotically Distribution Free – ADF (in the book he calls it arbitrary):
– Estimates the skew/kurtosis in the data to generate a model
– May not converge because of the number of parameters to estimate
– I’ve always found this not to be helpful.

Other Methods
Non-continuous data – you can estimate some models with non-continuous data, but you are better off switching to Mplus, which has robust (and automatic!) estimators for categorical data.