Controlling for Common Method Variance in PLS Analysis: The Measured Latent Marker Variable Approach. Wynne W. Chin, Jason Bennett Thatcher, Ryan T. Wright, Douglas J. Steel.

Presentation transcript:

Controlling for Common Method Variance in PLS Analysis: The Measured Latent Marker Variable Approach. Wynne W. Chin, Jason Bennett Thatcher, Ryan T. Wright, Douglas J. Steel

Outline
– Common Method Bias
– Study 1: Evaluating the Unmeasured Latent Marker Construct (ULMC) Approach
– Study 2: A Measured Latent Marker Variable Approach
– Conclusions

Common Method Bias
Several authors have developed approaches for addressing common method bias in SEM that have since been applied to PLS:
– Williams et al. (1989)
– Podsakoff et al. (2003)

Study One: Evaluating the ULMC Approach in PLS
Liang et al. (2007) created a PLS Unmeasured Latent Marker Construct (ULMC) approach to control for common method variance.
Constructing the model:
– Take all the indicators for each construct and reuse them to create single-indicator constructs.
– Link the original constructs to their respective single-indicator constructs.
– Link a method construct, consisting of all indicators used in the study, to all the single-indicator constructs.

The Liang ULMC Approach in PLS
Evaluating common method variance:
– Estimate the model using bootstrapping.
– Compare the statistical significance of the loadings on the method factor.
– Examine the variance explained in loadings and constructs; the squared method loadings are interpreted as the variance explained by common method (illustrated below).
– A lack of significant method loadings and small method variances are viewed as indicators of the absence of CMB.
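As a rough numerical illustration of that last step, here is a minimal sketch with hypothetical loading values (not taken from the study) showing how squared method-factor loadings are read as method variance:

```python
import numpy as np

# Hypothetical loadings for six indicators on the substantive (trait)
# factor and on the ULMC method factor; the values are illustrative only.
trait_loadings = np.array([0.80, 0.75, 0.78, 0.72, 0.81, 0.77])
method_loadings = np.array([0.15, 0.22, 0.10, 0.18, 0.05, 0.12])

trait_variance = trait_loadings ** 2    # indicator variance attributed to the trait
method_variance = method_loadings ** 2  # indicator variance attributed to the method

print(f"Average trait variance:  {trait_variance.mean():.3f}")
print(f"Average method variance: {method_variance.mean():.3f}")
```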

Problem: The ULMC Approach Was Not Vetted
Issues with Liang et al. (2007):
– No proofs
– No simulations
– No evidence that it worked
Issues with the ULMC approach itself:
– Richardson et al. (2009) demonstrated through a series of simulations that it rarely worked in ML SEM.

Evaluating the ULMC Method

Monte Carlo simulations:
– 500 datasets of 5,000 observations each, generated in PRELIS (a generation sketch follows)
– Method bias of different forms and at different levels: congeneric and noncongeneric
– Models estimated using PLS-Graph 3.0
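For intuition, a minimal sketch of how one such dataset could be generated (using numpy rather than PRELIS; the loading values, structural path, and other settings shown are illustrative assumptions, not the exact simulation design):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # observations per simulated dataset

# Illustrative population values: six indicators per construct, trait
# loadings that vary across indicators (noncongeneric), and a shared
# method factor loading on every indicator at a chosen level.
trait_loadings = np.array([0.80, 0.75, 0.70, 0.65, 0.60, 0.55])
method_loading = 0.4
beta_xy = 0.5  # hypothetical true structural path X -> Y

x_true = rng.normal(size=n)
y_true = beta_xy * x_true + rng.normal(scale=np.sqrt(1 - beta_xy**2), size=n)
method = rng.normal(size=n)  # common method factor shared by all items

def make_items(latent):
    """Indicators = trait part + method part + unique error."""
    lam = trait_loadings[None, :]
    unique_sd = np.sqrt(np.clip(1.0 - lam**2 - method_loading**2, 0.05, None))
    return (latent[:, None] * lam
            + method[:, None] * method_loading
            + rng.normal(scale=unique_sd, size=(n, lam.shape[1])))

items_x = make_items(x_true)
items_y = make_items(y_true)

# The composite-score correlation is inflated relative to beta_xy
# because both item sets share the method factor.
print(np.corrcoef(items_x.mean(axis=1), items_y.mean(axis=1))[0, 1])
```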

Table 3a. Summary Results of the Scenarios' PLS ULMC Analysis (means and standard deviations of the true-score path X→Y, the structural path XX→YY, the indicator loadings XX→A and YY→B, and the method-factor loadings M→A and M→B across Scenarios S1–S5; the numeric cell values did not survive in this transcript). * Scenario 1 (S1) = Latent Item Loadings (LIL) are noncongeneric (NC), Method Loadings (ML) are 0; S2 = LIL are NC, ML are NC at .4; S3 = LIL are NC and ML are NC at .6; S4 = LIL are congeneric (C) and ML are NC at .4; S5 = LIL are C and ML are NC at .6.

Table 3b. Summary Results of the Scenarios' PLS ULMC Analysis (means and standard deviations of the true-score path X→Y, the structural path XX→YY, the indicator loadings XX→A and YY→B, and the method-factor loadings M→A and M→B across Scenarios S6–S10; the numeric cell values did not survive in this transcript). * Scenario 6 (S6) = Latent Item Loadings (LIL) are noncongeneric (NC), Method Loadings (ML) are congeneric (C) at an average of .4; S7 = LIL are C, ML are C at an average of .4; S8 = LIL are NC and ML are represented by the method (M) score at 0; S9 = LIL are NC and ML are represented by the method (M) score at .4; S10 = LIL are NC and ML are represented by the method (M) score at .6.

Conclusions of Part One
Whereas Richardson et al. (2009) found that the ULMC approach had limited utility for detecting CMB using ML SEM, the same technique applied to PLS had no ability to detect or control for CMB.

Part Two: Measured Latent Marker Variable Approach
A measured latent marker variable (MLMV) approach uses a set of items that are unrelated to each other and to the constructs of interest. By doing so, we capture just the variance attributable to method, rather than the covariance among theoretically connected latent constructs. Corrections can be performed at the construct level or at the item level. We illustrate the approach using a simulation with the same parameters as Study 1.

Creating Indicators for the MLMV
– Each indicator must not be in the same domain as the constructs in the research model.
– Each indicator must be drawn from a different unit of analysis than the one investigated in the research model.
– Rather than striving for reliability, ensure that the unique and error variances are independent among the set of measures chosen.
– The MLMV must include a minimum of 4 items.
– A well-designed survey should include these indicators at the end of the instrument.

Construct Level Corrections (CLC)
– Create as many CMV control constructs as there are constructs in the model.
– Each CMV control construct uses the same, entire set of MLMV items.
– Each CMV control construct is modeled as impacting its corresponding model construct.
– The residuals obtained now represent the model constructs with the CMV effects removed (see the sketch below).
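A minimal sketch of the residual idea behind CLC, assuming PLS construct scores and a matrix of MLMV item scores are already available (all variable names here are hypothetical):

```python
import numpy as np

def cmv_residuals(construct_scores, mlmv_items):
    """Regress a construct score on the full set of MLMV items and
    return the residuals: the construct with CMV partialed out."""
    X = np.column_stack([np.ones(len(mlmv_items)), mlmv_items])
    coefs, *_ = np.linalg.lstsq(X, construct_scores, rcond=None)
    return construct_scores - X @ coefs

# Hypothetical usage, with xx_scores and yy_scores as PLS construct
# scores and mlmv as an (n x 12) matrix of marker items:
# xx_clean = cmv_residuals(xx_scores, mlmv)
# yy_clean = cmv_residuals(yy_scores, mlmv)
# corrected_path = np.corrcoef(xx_clean, yy_clean)[0, 1]
```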

PLS Estimates Using CMV of .36

Construct Level Corrections
– Enter the 12 MLMV items as controls by creating a CLC construct for each of the two model constructs.
– This reduces the inflated path estimate to more closely match the population parameter, representing the impact of construct XX on construct YY holding the CLC constant.
– The CLC scores are then used to partial out the CMV from both constructs, yielding the partial correlation between XX and YY (computed below).
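The corrected relationship can also be expressed as a first-order partial correlation computed from the zero-order correlations; a minimal sketch, with the CLC score as the single control (again using hypothetical variable names):

```python
import numpy as np

def partial_corr(x, y, control):
    """Partial correlation of x and y holding one control score
    (here, the CLC construct score) constant."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xc = np.corrcoef(x, control)[0, 1]
    r_yc = np.corrcoef(y, control)[0, 1]
    return (r_xy - r_xc * r_yc) / np.sqrt((1 - r_xc**2) * (1 - r_yc**2))

# Hypothetical usage:
# r_xy_given_clc = partial_corr(xx_scores, yy_scores, clc_scores)
```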

First Step in CLC: PLS Estimates Using Items with 0.36 CMV

Assessment of Construct Level Corrections
Table 1. Number of Latent Marker Measures and Percentage Reduction in CMV Using CLC (structural estimates not preserved in this transcript)
# of MLMV Items in CLC: 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1
Percent Reduction in CMV: 100%, 97%, 94%, 91%, 89%, 87%, 83%, 79%, 72%, 63%, 50%, 32%
While a 12-item CLC effectively captured the simulated CMV, our simulation illustrates that a four-item MLMV removes 72 percent of the variance due to CMV. Given that researchers tend to have limited space on survey instruments for additional items, these results illustrate that the MLMV approach is flexible enough to be included in surveys of varying lengths.

Item Level Corrections (ILC)
– Use the MLMV items to partial out the CMV effects at the level of the individual measurement items.
– Each item measure is regressed on the entire set of MLMV items.
– The residuals for each item now represent the construct items with the CMV effects removed (see the sketch below).
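A minimal sketch of the item-level regression step, assuming an n x k matrix of construct items and a matrix of MLMV items (variable names are hypothetical):

```python
import numpy as np

def ilc_residuals(items, mlmv_items):
    """Regress every measurement item on the full set of MLMV items
    and return the matrix of residuals (one column per item)."""
    X = np.column_stack([np.ones(len(mlmv_items)), mlmv_items])
    coefs, *_ = np.linalg.lstsq(X, items, rcond=None)  # solves all items at once
    return items - X @ coefs

# Hypothetical usage, with xx_items as the indicators of construct XX:
# xx_items_clean = ilc_residuals(xx_items, mlmv)
```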

Item Level Corrections
– The partialed-out CMV should be replaced with an equivalent amount of random error so that the corrected items have variance equivalent to the original measures sans bias.
– An assessment of the original items is required: the R-square obtained from regressing each item on the MLMV items is used.
– Specifically, the square root of that R-square, multiplied by a number drawn from a normal distribution with mean 0 and standard deviation 1, is added to each item residual.
– This yields the final ILC items used in a PLS analysis (see the sketch below).
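Continuing the previous sketch, a minimal illustration of the compensation step under the rule just described (square root of each item's R-square times a standard normal draw, added to the residual); the helper and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def ilc_with_compensation(items, mlmv_items):
    """Partial out the MLMV items from each measurement item, then add
    back random noise scaled by the square root of each item's R-square
    so the corrected items keep variance comparable to the originals."""
    X = np.column_stack([np.ones(len(mlmv_items)), mlmv_items])
    coefs, *_ = np.linalg.lstsq(X, items, rcond=None)
    residuals = items - X @ coefs
    # R-square of each item-on-MLMV regression (one value per item column).
    r_square = 1.0 - residuals.var(axis=0) / items.var(axis=0)
    noise = rng.normal(size=items.shape) * np.sqrt(np.clip(r_square, 0.0, None))
    return residuals + noise

# Hypothetical usage:
# xx_items_final = ilc_with_compensation(xx_items, mlmv)
```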

Item Level Corrections
– This results in item loadings that are more consistent with a PLS analysis without CMV effects.
– The estimated loadings, ranging upward from 0.76, are consistent with the tendency of PLS to overestimate loadings by approximately 10 percent.
– The estimated structural path is consistent with an approximate 10 percent underestimation of the population parameter; to correct this, noise must be added back into the model.

Results of ILC Approach

CLC and ILC Results
– Overall, both approaches converge to similar results.
– With CLC, we obtain an accurate path estimate at the expense of the loadings.
– With ILC, we obtain more accurate item loading estimates at the expense of the structural path; however, accurate path estimates can be obtained if we compensate for the CMV partialed out.

Results of ILC with Error Compensation

Varying the Level and Form of Error
– We introduced different levels and forms of error into the simulations.
– Two items for each construct had true-score and method loadings of 0.8 and 0.2.
– Two items were set at 0.7 and 0.4.
– Two items were set at 0.6 and 0.6 (i.e., equal amounts of true and method effects).

Results with Varying Level of Trait and Method

Results of CLC with Different Trait and Method Impact

Results of ILC with Different Trait and Method (No Error Compensation)

Results of ILC with Different Trait and Method (With Error Compensation)

Comparing Approaches
Unmeasured Latent Marker vs. Measured Latent Marker:
– Post hoc vs. a priori
– Unplanned vs. carefully planned by the researcher
– Item level vs. item and construct level
– Based on the same set of items vs. items drawn at the same time, yet spanning completely different sets of knowledge
– Low effort vs. modest effort
– Discredited, unvetted approach (Chin et al., forthcoming) vs. rigorously vetted via simulation; collecting field data

Conclusion
– We have presented initial evidence for two methods of correcting for common method bias in PLS path modelling.
– CLC is more easily implemented.
– ILC yields more insight into the influence of CMB in the measurement model.
– The approach must be tailored to each study.