
MTMM Byrne Chapter 10

MTMM
Multi-trait multi-method:
– Traits – the latent factors you are trying to measure
– Methods – the way you are measuring the latent factors

MTMM
Why?
– Convergent validity – how much different assessments measure the same traits
– So you want scale 1 and scale 2 to measure trait 1 equally well

MTMM
Why?
– Discriminant validity – how much different assessment methods diverge in measuring different traits
– So you want scale 1 and scale 2 to measure trait 1 and trait 2 differently (aka the traits shouldn’t be the same)

MTMM
Why?
– Method effects – how much the measurement method itself creates overlap between traits
– So you want scale 1 to measure trait 1 and trait 2 differently

Steps
The Widaman rules
– You are using a set of nested CFAs, so you want to use a chi-square difference test.
– (I think you could also probably easily use AIC or ECVI as well, but chi-square is the most common.)
– You can also use a new rule that we will use extensively for MG models: a change in CFI > .01 is a significant change in fit.
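The two decision rules can be sketched in a few lines of Python. This is just an illustrative helper, and the fit statistics plugged in at the bottom are hypothetical values, not numbers from the Byrne example.

```python
from scipy.stats import chi2

def chi_square_difference(chisq_nested, df_nested, chisq_full, df_full):
    """Chi-square difference test for two nested models.

    The more constrained (nested) model has the larger chi-square and more
    degrees of freedom; the test asks whether the added constraints
    significantly worsen fit.
    """
    diff = chisq_nested - chisq_full
    ddf = df_nested - df_full
    return diff, ddf, chi2.sf(diff, ddf)

def cfi_change_is_significant(cfi_full, cfi_nested, cutoff=0.01):
    """Delta-CFI rule: a drop in CFI greater than .01 is a significant change in fit."""
    return (cfi_full - cfi_nested) > cutoff

# Hypothetical fit statistics for a constrained model vs. Model 1:
diff, ddf, p = chi_square_difference(chisq_nested=250.4, df_nested=104,
                                     chisq_full=180.2, df_full=98)
print(f"delta chi2 = {diff:.1f} on {ddf} df, p = {p:.4g}")
print(cfi_change_is_significant(cfi_full=0.97, cfi_nested=0.93))  # True
```

With these made-up numbers, both rules agree: the constrained model fits significantly worse.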

Steps
First, check out the new model specification, as it is not a normal CFA.
– I really would recommend setting up the model with traits on one side and methods on the other, with the questions down the middle.

Steps
First, check out the new model specification, as it is not a normal CFA.
– (page 277) As you’ll see, we do not fix an indicator loading to 1 on either side of the model.
– You should instead set the variance of each latent to 1.
– It’s still probably easiest to use the CFA button, but you will have to move the 1 from the indicator loading to the latent variance (for identification).

Steps
Model 1 – Correlated Traits / Correlated Methods
– You will correlate traits with each other.
– You will correlate methods with each other.
– You will NOT cross-correlate methods and traits. So you’ll get the error message saying “but wait! Correlations!” Just hit Proceed.

Steps
Model 1
– One of the big concerns/complaints with the traditional MTMM steps is that Model 1 is likely to create a Heywood case with a negative error variance.
– Mostly because these models are so complex – a lot of this stuff is correlated (on purpose), so it’s hard to figure out where all that correlation goes.

Steps
Model 1
– If you get a Heywood case (a negative variance):
Set the value to something small and positive (.01), or
Set the value equal to another small positive parameter.

Comparisons
Every model (2–4) is compared to Model 1.
– Model 1 represents the best-case scenario, wherein traits are correlated but not perfectly, and they are estimated by the measurements but not perfectly.

Steps
Save every model as a different file. Trust me.

Steps
Model 2 – No Traits / Correlated Methods
– Completely delete the trait latents (leave those indicators alone!).
– So now, you are basically testing methods only.

Comparisons
Model 1 versus Model 2 – convergent validity of the traits
– Convergent validity – independent measures of the same trait are correlated.
– You want Model 2 to be worse than Model 1.
– Model 1 has both traits and methods; Model 2 has only methods. If Model 2 fits well, that means the methods are the best explanation for the data, and the traits are not useful (eek).

Steps
Model 3 – Perfectly Correlated Traits / Freely Correlated Methods
– In this model, you have both traits and methods.
– You will set the trait covariance arrows (those double-headed ones) to 1. Remember: double-click on them and set the parameter to 1.
– Now we are testing if all the traits are the same.

Comparisons
Model 1 versus Model 3 – discriminant validity of the traits
– Discriminant validity – you want traits to measure different things.
– You want Model 3 to be bad.
– So you are comparing a freely estimated model (Model 1) to a model where the trait correlations are set to 1 (Model 3).

Steps
Model 4 – Freely Correlated Traits / Uncorrelated Methods
– In this model, delete the covariance arrows between the methods latents (don’t delete the latents themselves).

Comparisons
Model 1 versus Model 4 – discriminant validity of the methods
– Discriminant validity – you want the methods to measure different things (otherwise why use them all?).
– So you want the freely estimated correlations (Model 1) to equal the unestimated correlations (set to zero in Model 4).
– You want Model 1 = Model 4.
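The full comparison scheme from these slides can be collected into one lookup, which is handy to have next to you when writing up results. This is just the slides restated in code form, nothing added:

```python
# Summary of the nested-model comparisons (Models 2-4, each vs. Model 1).
comparisons = {
    "Model 2 (no traits / correlated methods)": {
        "assesses": "convergent validity of the traits",
        "want": "Model 2 fits WORSE than Model 1",
    },
    "Model 3 (perfectly correlated traits / free methods)": {
        "assesses": "discriminant validity of the traits",
        "want": "Model 3 fits WORSE than Model 1",
    },
    "Model 4 (free traits / uncorrelated methods)": {
        "assesses": "discriminant validity of the methods",
        "want": "Model 4 fits the SAME as Model 1",
    },
}

for model, info in comparisons.items():
    print(f"{model}: {info['assesses']} -> {info['want']}")
```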

Parameter Estimates
Create yourself a table for this one. For each trait:
– Items across the top
– Methods down the side
– Standardized parameters on the diagonal (traits)
– Standardized parameters down the side (methods)

Parameter Estimates
In that table you want:
– Convergent validity – the trait loadings are higher than the method loadings. This finding tells you that trait effects > method effects.
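As a sketch, that check amounts to comparing each indicator’s standardized trait loading against its method loading. The loadings below are invented for illustration (using the happy/sad traits and two of the methods from the exercise); they are not Byrne’s or Brown’s estimates.

```python
# Hypothetical standardized loadings from a Model 1 solution; each
# indicator loads on one trait and one method. Values are invented.
loadings = {
    ("happy", "acl"):  {"trait": 0.71, "method": 0.32},
    ("happy", "lkrt"): {"trait": 0.65, "method": 0.41},
    ("sad",   "acl"):  {"trait": 0.80, "method": 0.25},
    ("sad",   "lkrt"): {"trait": 0.58, "method": 0.44},
}

# Convergent validity: trait effects should exceed method effects.
results = {}
for (trait, method), est in loadings.items():
    results[(trait, method)] = est["trait"] > est["method"]
    verdict = "OK" if results[(trait, method)] else "method effects dominate"
    print(f"{trait}/{method}: trait {est['trait']:.2f} vs "
          f"method {est['method']:.2f} -> {verdict}")
```

With these made-up values, every indicator passes: trait loadings dominate method loadings.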

Parameter Estimates
Create yourself a table for this one:
– A correlation table with traits and methods across the top and down the side
– Create little triangles where they match (i.e., only include correlations of traits with traits and methods with methods)

Parameter Estimates
In the traits table:
– Discriminant validity – you want the trait correlations to be low (aka they don’t overlap that much).
In the methods table:
– Discriminant validity – you want the method correlations to be low.
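A minimal screening of those two triangles: flag any trait–trait or method–method correlation large enough to question discriminant validity. Both the correlations and the .85 cutoff below are illustrative assumptions, not values or rules from the chapter.

```python
# Hypothetical latent correlations for the two triangles (invented values).
trait_corrs = {("happy", "sad"): -0.35}
method_corrs = {
    ("acl", "ad"): 0.28, ("acl", "desc"): 0.22, ("acl", "lkrt"): 0.31,
    ("ad", "desc"): 0.18, ("ad", "lkrt"): 0.26, ("desc", "lkrt"): 0.20,
}

def flag_high(corrs, cutoff=0.85):
    """Return the pairs whose |r| is high enough to suggest the factors overlap."""
    return [pair for pair, r in corrs.items() if abs(r) >= cutoff]

print("trait pairs to worry about:", flag_high(trait_corrs))
print("method pairs to worry about:", flag_high(method_corrs))
```

With these made-up numbers, nothing is flagged, which is the pattern you are hoping to see.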

Correlated Uniqueness
In this type of model, you assume that the method effects are captured by correlations among the error terms.
– (This is a completely separate approach – Byrne lists it as Model 5, but you wouldn’t treat it as step 5 of the sequence.)

Correlated Uniqueness
To set up:
– Create the traits side of your model, the same as the first model you created.
– Instead of creating a methods side with latents, just correlate the errors within each method (not all of them!).

Correlated Uniqueness
You do not have anything to compare the model to, though … you just use the parameter comparisons explained earlier.

Correlated Uniqueness
Convergent validity – check out the standardized estimates for each trait diagonal, comparing methods.
– Like factor loadings, you want them to be significant (usually above .300).

Correlated Uniqueness
Discriminant validity – you will check out the correlation table for the traits (same rules).
– You will then check out the correlations of the error terms within each method to determine if they are highly correlated (same rules).

Let’s Try It!
Brown MTMM happy/sad data – see handout for tables/charts
– Traits – happy, sad
– Methods – acl, ad, desc, lkrt