Introduction to CFA

LEARNING OBJECTIVES: Upon completing this chapter, you should be able to do the following:
Distinguish between exploratory factor analysis and confirmatory factor analysis.
Assess the construct validity of a measurement model.
Represent a measurement model using a path diagram.
Understand the basic principles of statistical identification and know some of the primary causes of SEM identification problems.
Understand the concept of fit as it applies to measurement models and be able to assess the fit of a confirmatory factor analysis model.
Know how SEM can be used to compare results between groups, including assessing the cross-validation of a measurement model.

Confirmatory Factor Analysis Overview
What is it? Why use it?

Confirmatory Factor Analysis Defined
Confirmatory factor analysis (CFA) is similar to EFA in some respects, but philosophically it is quite different. With CFA, the researcher must specify both the number of factors that exist within a set of variables and which factor each variable will load highly on before results can be computed. The technique does not assign variables to factors; the researcher must make this assignment before any results can be obtained. SEM is then applied to test the extent to which the researcher's a priori pattern of factor loadings represents the actual data.
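That a priori assignment of variables to factors is usually written down as a model specification before any estimation. Below is a minimal, hypothetical sketch in the lavaan-style syntax used by Python SEM packages such as semopy; the factor and indicator names (Quality, Value, Loyalty, x1–x9) are illustrative, not from the chapter.

```python
# Hypothetical three-factor CFA specification in lavaan-style syntax
# (factor and indicator names are illustrative, not real data).
cfa_spec = """
Quality =~ x1 + x2 + x3
Value   =~ x4 + x5 + x6
Loyalty =~ x7 + x8 + x9
"""

# Each "=~" line fixes, a priori, which factor each indicator loads on;
# unlike EFA, no cross-loadings are estimated unless explicitly added.
# With semopy, this string would be passed to semopy.Model(cfa_spec)
# and then fit to data before any loadings are estimated.
n_factors = cfa_spec.count("=~")
print(n_factors)  # 3 measurement equations, one per factor
```

The point of the sketch is the contrast with EFA: the structure is an input to the analysis, not an output of it.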

Review of and Contrast with Exploratory Factor Analysis
EFA (exploratory factor analysis) explores the data and provides the researcher with information about how many factors are needed to best represent the data. With EFA, all measured variables are related to every factor by a factor loading estimate. Simple structure results when each measured variable loads highly on only one factor and has smaller loadings on the other factors (i.e., loadings < .4). The distinctive feature of EFA is that the factors are derived from statistical results, not from theory, and so they can be named only after the factor analysis is performed. EFA can be conducted without knowing how many factors really exist or which variables belong with which constructs. In this respect, CFA and EFA are not the same.
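The simple-structure criterion above is mechanical enough to check in code. This is a minimal sketch using the slide's .4 cutoff; the loading matrix is illustrative, not from a real analysis.

```python
# Sketch: checking "simple structure" in an EFA loading matrix.
# Loadings below .4 are treated as small, per the slide's rule of thumb.
# The values below are illustrative, not from real data.
loadings = {
    "x1": [0.72, 0.15],
    "x2": [0.68, 0.22],
    "x3": [0.10, 0.81],
    "x4": [0.25, 0.64],
}

def has_simple_structure(loadings, cutoff=0.4):
    # Each variable should exceed the cutoff on exactly one factor.
    return all(sum(abs(l) >= cutoff for l in row) == 1
               for row in loadings.values())

print(has_simple_structure(loadings))  # True for this illustrative matrix
```

A variable loading above the cutoff on two factors (a cross-loading) would make the check fail, which is exactly the situation CFA rules out by design.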

CFA and Construct Validity
One of the biggest advantages of CFA/SEM is its ability to assess the construct validity of a proposed measurement theory. Construct validity is the extent to which a set of measured items actually reflects the theoretical latent construct the items are designed to measure. Construct validity is made up of four important components:
1. Convergent validity – assessed three ways: factor loadings, variance extracted, and reliability.
2. Discriminant validity.
3. Nomological validity.
4. Face validity.

Rules of Thumb
Construct Validity: Convergent and Discriminant Validity
Standardized loading estimates should be .5 or higher, and ideally .7 or higher.
Variance extracted (VE) should be .5 or greater to suggest adequate convergent validity.
Construct reliability should be .7 or higher to indicate adequate convergence or internal consistency.
The VE estimates for any two factors should each be greater than the square of the correlation between those two factors to provide evidence of discriminant validity.
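These thresholds are simple functions of the standardized loadings, so they can be computed directly. A minimal sketch, using illustrative loadings and an illustrative inter-factor correlation (not real data):

```python
# Sketch: the convergent/discriminant checks above, computed from
# standardized loadings. All numbers are illustrative.

def ave(loadings):
    # Average variance extracted: mean of the squared standardized loadings.
    return sum(l ** 2 for l in loadings) / len(loadings)

def construct_reliability(loadings):
    # CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances);
    # for standardized loadings each error variance is 1 - loading^2.
    s = sum(loadings) ** 2
    e = sum(1 - l ** 2 for l in loadings)
    return s / (s + e)

f1 = [0.78, 0.74, 0.71]   # illustrative standardized loadings, factor 1
f2 = [0.80, 0.76, 0.69]   # illustrative standardized loadings, factor 2
r12 = 0.55                # illustrative correlation between the factors

print(round(ave(f1), 3))                    # VE above the .5 threshold
print(round(construct_reliability(f1), 3))  # CR above the .7 threshold
# Discriminant validity (Fornell-Larcker style): each factor's VE
# should exceed the squared inter-factor correlation.
print(ave(f1) > r12 ** 2 and ave(f2) > r12 ** 2)
```

With these illustrative numbers both factors clear the convergent thresholds and the squared correlation (.3025), so the discriminant check passes.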

Confirmatory Factor Analysis Stages
Stage 1: Defining Individual Constructs
Stage 2: Developing the Overall Measurement Model
Stage 3: Designing a Study to Produce Empirical Results
Stage 4: Assessing the Measurement Model Validity
Stage 5: Specifying the Structural Model
Stage 6: Assessing Structural Model Validity
Note: CFA involves stages 1–4 above. SEM adds stages 5 and 6.

Stage 1: Defining Individual Constructs
List the constructs that will comprise the measurement model.
Determine if existing scales/constructs are available or can be modified to test your measurement model.
If existing scales/constructs are not available, then develop new scales.

Rules of Thumb
Defining Individual Constructs
All constructs must display adequate construct validity, whether they are new scales or scales taken from previous research. Even previously established scales should be carefully checked for content validity.
Experts should judge the items' content validity in the early stages of scale development:
When two items have virtually identical content, one should be dropped.
Items upon which the judges cannot agree should be dropped.
A pretest should be used to purify measures prior to confirmatory testing.

Stage 2: Developing the Overall Measurement Model
Key Issues:
Unidimensionality.
Measurement model.
Items per construct (identification).
Reflective vs. formative measurement models.

Stage 2: A Measurement Model (and SEM)
A SEM diagram commonly has certain standard elements: latents are ellipses, indicators are rectangles, error and residual terms are circles, single-headed arrows are causal relations (note that causality runs from a latent to its indicators), and double-headed arrows are correlations between indicators or between exogenous latents. Path coefficient values may be placed on the arrows from latents to indicators, from one latent to another, from an error term to an indicator, or from a residual term to a latent. Each endogenous variable has an error term, sometimes called a disturbance term or residual error, not to be confused with the indicator error, e, associated with each indicator variable. (The measurement model diagram itself is not reproduced in this transcript.)

Rules of Thumb
Developing the Overall Measurement Model
In standard CFA applications testing a measurement theory, within- and between-construct error covariance terms should be fixed at zero and not estimated.
In standard CFA applications testing a measurement theory, each measured variable should be free to load on only one construct.
Latent constructs should be indicated by at least three measured variables, preferably four or more. In other words, latent factors should be statistically identified.

Rules of Thumb
Developing the Overall Measurement Model
Formative factors are not latent and are not validated the way conventional reflective factors are; internal consistency and reliability are not important. The variables that make up a formative factor should explain the largest portion of variation in the formative construct itself and should relate highly to other constructs that are conceptually related (minimum correlation of .5):
Formative factors present greater difficulties with statistical identification.
Additional variables or constructs must be included along with a formative construct in order to achieve an over-identified model.
A formative factor should be represented by the entire population of items that form it; therefore, items should not be dropped because of a low loading.
With reflective models, any item that is not expected to correlate highly with the other indicators of a factor should be deleted.

Rules of Thumb
Designing a Study to Provide Empirical Results
The "scale" of a latent construct can be set by either:
Fixing one loading and setting its value to 1, or
Fixing the construct variance and setting its value to 1.
Congeneric, reflective measurement models in which all constructs have at least three item indicators should be statistically identified.
The researcher should check for errors in the specification of the measurement model when identification problems are indicated.
Models with large samples (more than 300) that adhere to the three-indicator rule generally do not produce Heywood cases.
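The counting side of identification can be sketched directly: a model is over-identified when the observed covariance matrix supplies more unique elements than there are free parameters. This is a minimal sketch for a congeneric CFA (no cross-loadings, no error covariances), with the latent scale set by fixing one loading per factor to 1; the factor/indicator counts are illustrative.

```python
# Sketch: degrees of freedom for a congeneric CFA with no
# cross-loadings or error covariances (the "t-rule" count).

def cfa_df(n_indicators, n_factors):
    # Unique elements in the observed covariance matrix: p(p+1)/2.
    moments = n_indicators * (n_indicators + 1) // 2
    # Free parameters: one loading per indicator, minus one per factor
    # (fixed to 1 to set the latent scale); one error variance per
    # indicator; one variance per factor; and the factor covariances.
    loadings = n_indicators - n_factors
    error_vars = n_indicators
    factor_vars = n_factors
    factor_covs = n_factors * (n_factors - 1) // 2
    free = loadings + error_vars + factor_vars + factor_covs
    return moments - free

# Three factors with three indicators each: comfortably over-identified.
print(cfa_df(9, 3))  # 45 moments - 21 free parameters = 24 df
```

Under the same count, cfa_df(3, 1) returns 0 (a single factor with three indicators is just-identified) and cfa_df(2, 1) returns -1 (under-identified), which is why the slides recommend at least three indicators per latent construct.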

Stage 3: Designing a Study to Produce Empirical Results
Key Issues:
Measurement scales in CFA.
SEM/CFA and sampling.
Specifying the model:
Which indicators belong to each construct?
Setting the scale to 1 for one indicator on each construct.
Issues in identification.
Problems in estimation:
Heywood cases.
Illogical standardized parameters.

Identification
Recognizing identification problems:
1. Very large standard errors.
2. Inability to invert the information matrix (no solution can be found).
3. Wildly unreasonable estimates, including negative error variances.
4. Unstable parameter estimates.

Stage 4: Assessing Measurement Model Validity
Key Issues:
Assessing fit:
Goodness-of-fit (GOF).
Construct validity.
Diagnosing problems:
Path estimates.
Standardized residuals.
Modification indices.
Specification search.

Rules of Thumb
Assessing Measurement Model Validity
Loading estimates can be statistically significant but still too low to qualify as a good item (standardized loadings below |.5|). In CFA, items with low loadings become candidates for deletion.
Completely standardized loadings above +1.0 or below -1.0 are outside the feasible range and can be an important indicator of a problem with the data.
Typically, standardized residuals less than |2.5| do not suggest a problem.
Standardized residuals greater than |4.0| suggest a potentially unacceptable degree of error that may call for the deletion of an offending item.
Standardized residuals between |2.5| and |4.0| deserve some attention, but may not suggest any changes to the model if no other problems are associated with those two items.
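These thresholds amount to a screening rule that can be sketched in a few lines. The sketch below flags an item when its standardized loading is below |.5| or when the largest absolute standardized residual involving the item exceeds |4.0|; all the item values are illustrative, not from a fitted model.

```python
# Sketch: flagging candidate items for deletion using the rule-of-thumb
# thresholds above. Each item maps to (standardized loading, largest
# absolute standardized residual involving that item). Illustrative data.

def screen_items(items):
    flagged = []
    for name, (loading, max_resid) in items.items():
        # Flag on a weak loading OR a clearly excessive residual;
        # residuals between |2.5| and |4.0| are watched, not flagged.
        if abs(loading) < 0.5 or abs(max_resid) > 4.0:
            flagged.append(name)
    return flagged

items = {
    "x1": (0.74, 1.2),
    "x2": (0.41, 2.0),   # loading below |.5|
    "x3": (0.66, 4.3),   # residual above |4.0|
    "x4": (0.58, 2.8),   # between |2.5| and |4.0|: attention, not deletion
}
print(screen_items(items))  # ['x2', 'x3']
```

As the next slide stresses, a flag like this is only a prompt for theoretically justified revision, not an automatic deletion rule.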

Rules of Thumb
Assessing Measurement Model Validity
The researcher should use modification indices only as a guideline for model improvements that can be theoretically justified.
Specification searches based on purely empirical grounds are discouraged because they are inconsistent with the theoretical basis of CFA and SEM.
CFA results suggesting more than minor modification should be re-evaluated with a new data set. For instance, if more than two out of every 15 measured variables are deleted, the modifications cannot be considered minor.

CFA Learning Checkpoint
1. What is the difference between EFA and CFA?
2. Describe the four stages of CFA.
3. What is the difference between reflective and formative measurement models?
4. What is "statistical identification," and how can identification problems be avoided?
5. How do you decide whether a CFA is successful?

The End