Using the SmartPLS Software: Assessment of Measurement Models

Presentation transcript:

Using the SmartPLS Software: Assessment of Measurement Models
Joe F. Hair, Jr., Founder & Senior Scholar

Reflective Measurement Models (Stage 5a)

Corporate Reputation Extended Model

Extended Reputation Model Constructs

Outcome Reputation Constructs (endogenous):
- CUSL = loyalty (3 items)
- COMP = competence (3 items)
- CUSA = satisfaction (1 item)
- LIKE = likability (3 items)

Driver Constructs (exogenous):
- QUAL = quality of a company's products/services and customer orientation (8 items)
- PERF = economic and managerial performance (5 items)
- CSOR = corporate social responsibility (5 items)
- ATTR = attractiveness (3 items)

Reflective Measurement Models

To evaluate reflective measurement models, we examine the following (a sketch of these checks in code follows the list):
- outer loadings
- composite reliability
- average variance extracted (AVE; convergent validity)
- discriminant validity
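These checks can also be reproduced outside SmartPLS. Below is a minimal numpy sketch using hypothetical standardized outer loadings for a single reflective construct (the values are illustrative, not taken from the Reputation model output):

```python
import numpy as np

# Hypothetical standardized outer loadings for one reflective construct.
loadings = np.array([0.82, 0.86, 0.79])

# Outer loadings: each should exceed 0.708, so that at least
# 0.708^2 ~ 0.50 of each indicator's variance is explained by the construct.
print("All loadings >= 0.708:", bool(np.all(loadings >= 0.708)))

# Composite reliability: (sum of loadings)^2 divided by
# [(sum of loadings)^2 + sum of indicator error variances (1 - loading^2)].
error_var = 1.0 - loadings**2
cr = loadings.sum()**2 / (loadings.sum()**2 + error_var.sum())
print("Composite reliability:", round(cr, 3))  # ~0.864; should exceed 0.70

# AVE (convergent validity): the mean of the squared loadings.
ave = np.mean(loadings**2)
print("AVE:", round(ave, 3))  # ~0.679; should exceed 0.50
```

Discriminant validity is assessed separately, via the Fornell-Larcker criterion and the cross loadings (see the sketches later in this section).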

Run the PLS Algorithm to obtain the information needed to evaluate the reflective measurement models. When you select the Default Report, this is the screen you will get. To access the information for evaluating the reflective models, select one of the reports under this tab. To eliminate unnecessary options on the navigation tree, click the minus sign on its left side; you will get the simplified screen shown on the next slide.

Outer Loadings

All outer loadings of the reflective constructs COMP, CUSL, and LIKE are well above the minimum threshold value of 0.708, ranging from a low of 0.7985 to a high of 0.9173. The "Toggle Zeros" button in the task bar (top left of the screen) was used to improve the readability of the results table; it suppresses the zeros in the table.

Composite Reliability vs. Cronbach's Alpha

Reliability results for the Reputation model are in the Default Report under Quality Criteria, in the Overview report.
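As an illustration of why the two measures differ, the sketch below computes both from the same hypothetical, deliberately unequal loadings. Cronbach's alpha treats all indicators as equally reliable, so it tends to understate the reliability of constructs whose indicators have unequal loadings, while composite reliability weights each indicator by its own loading:

```python
import numpy as np

loadings = np.array([0.60, 0.75, 0.95])  # hypothetical, deliberately unequal

# Composite reliability weights each indicator by its own loading.
cr = loadings.sum()**2 / (loadings.sum()**2 + (1.0 - loadings**2).sum())

# Standardized Cronbach's alpha from the average inter-item correlation
# implied by a one-factor model (r_ij = loading_i * loading_j).
n = len(loadings)
implied_corr = np.outer(loadings, loadings)
r_bar = implied_corr[np.triu_indices(n, k=1)].mean()
alpha = n * r_bar / (1 + (n - 1) * r_bar)

print(f"Composite reliability: {cr:.3f}")  # ~0.818
print(f"Cronbach's alpha:      {alpha:.3f}")  # ~0.804, slightly lower
```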

Composite Reliability

All three reflective constructs have high levels of internal consistency reliability, as demonstrated by their composite reliability values. To obtain the table that shows the AVE, composite reliability, communality, redundancy, and related statistics, left-click the Overview tab under Quality Criteria.

What Are Convergent Validity and Discriminant Validity?

Discriminant validity is not present in the constructs above: the squared correlation between the constructs (variance shared between constructs = 64%) is larger than the AVE of Y1 (only 0.55, i.e., variance shared within the construct = 55%).
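The arithmetic behind this counterexample, as a quick check (the interconstruct correlation of 0.80 is inferred from the 64% shared variance stated above):

```python
# Fornell-Larcker logic for the failing example above.
corr_y1_y2 = 0.80   # 0.80**2 = 0.64, the variance shared between constructs
ave_y1 = 0.55       # variance Y1 shares with its own indicators

passes = ave_y1 > corr_y1_y2 ** 2
print("Y1 shows discriminant validity:", passes)  # False: 0.55 < 0.64
```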

Average Variance Extracted (AVE)

The AVE values (convergent validity) are well above the minimum required level of 0.50, demonstrating convergent validity for all three constructs. To obtain the table that shows the AVE, left-click the Overview tab under Quality Criteria.

Discriminant Validity

The off-diagonal values in the matrix above are the correlations between the latent constructs. To obtain the variance shared between the constructs, you must square these correlations; this calculation is shown on the next slide, and its results indicate there is discriminant validity between all the constructs. To obtain the table with the information needed to apply the Fornell-Larcker criterion, left-click the Latent Variable Correlations tab under Quality Criteria.

Discriminant Validity – Fornell-Larcker Criterion

Interconstruct Correlations
        COMP    CUSA    CUSL    LIKE
COMP    1
CUSA    0.4356  1
CUSL    0.4496  0.6892  1
LIKE    0.6452  0.5284  0.6146  1

Squared Interconstruct Correlations
        COMP    CUSA    CUSL    LIKE
COMP    0.6806
CUSA    0.1897  (single-item construct)
CUSL    0.2021  0.4750  0.7484
LIKE    0.4163  0.2792  0.3777  0.7471

Note: diagonal = AVEs. CUSA is a single-item construct, so the report shows no meaningful AVE for it (0.0000). Each AVE on the diagonal exceeds the squared correlations in its row and column, so the Fornell-Larcker criterion supports discriminant validity.
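The same check can be scripted. Below is a numpy sketch that applies the Fornell-Larcker criterion to the correlations and AVEs from the table above; since CUSA is a single-item construct, its AVE is set to 1.0 as a working assumption (the report does not provide one):

```python
import numpy as np

names = ["COMP", "CUSA", "CUSL", "LIKE"]
corr = np.array([  # interconstruct correlations from the report
    [1.0000, 0.4356, 0.4496, 0.6452],
    [0.4356, 1.0000, 0.6892, 0.5284],
    [0.4496, 0.6892, 1.0000, 0.6146],
    [0.6452, 0.5284, 0.6146, 1.0000],
])
ave = np.array([0.6806, 1.0, 0.7484, 0.7471])  # CUSA: single item, AVE = 1 assumed

squared = corr ** 2
for i, name in enumerate(names):
    # Compare the construct's AVE with its squared correlations with all others.
    others = np.delete(squared[i], i)
    print(name, "passes Fornell-Larcker:", bool(ave[i] > others.max()))
```

All four constructs print True, matching the conclusion on the slide.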

Discriminant Validity – Cross-Loadings Criterion

Comparing the loadings across the columns in the matrix above indicates that each indicator's loading on its own construct is in all cases higher than all of its cross loadings with other constructs. The results therefore indicate there is discriminant validity between all the constructs based on the cross-loadings criterion. To obtain the cross-loadings table used to assess discriminant validity, left-click the Cross Loadings tab under Quality Criteria.
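A sketch of the cross-loadings comparison with pandas, using a small hypothetical loadings matrix for two of the constructs (the values are illustrative, not the actual Reputation model output):

```python
import pandas as pd

# Hypothetical cross-loadings: rows = indicators, columns = constructs.
cross = pd.DataFrame(
    {"COMP": [0.82, 0.86, 0.80, 0.41, 0.38],
     "LIKE": [0.44, 0.40, 0.37, 0.88, 0.87]},
    index=["comp_1", "comp_2", "comp_3", "like_1", "like_2"],
)
own_construct = {"comp_1": "COMP", "comp_2": "COMP", "comp_3": "COMP",
                 "like_1": "LIKE", "like_2": "LIKE"}

# Each indicator should load highest on the construct it is assigned to.
for indicator, construct in own_construct.items():
    row = cross.loc[indicator]
    ok = row[construct] == row.max()
    print(f"{indicator}: loads highest on {construct}: {bool(ok)}")
```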

Formative Measurement Models (Stage 5b)

Evaluating Formative Measurement Models

Empirical assessment of formative measurement models differs from that of reflective measurement models, because formative indicators theoretically represent the construct's independent causes and thus do not necessarily correlate highly. As a result, internal consistency reliability measures such as Cronbach's alpha are not appropriate. Instead, researchers should focus on establishing content validity before empirically evaluating formatively measured constructs. This requires ensuring that the formative indicators capture all (or at least the major) facets of the construct.

Corporate Reputation Extended Model

Corporate Reputation Extended Model The extended corporate reputation model has three main conceptual/theoretical components: (1) the target constructs of interest (i.e., CUSA and CUSL); (2) the two corporate reputation dimensions, COMP and LIKE, that represent key determinants of the target constructs; and (3) the four exogenous driver constructs (i.e., ATTR, CSOR, PERF, and QUAL) of the two corporate reputation dimensions.

Indicators for SEM Model Exogenous Constructs – Assessing Content Validity