
1 © Willett, Harvard University Graduate School of Education, 8/27/2015 — S052/I.3(c) – Slide 1

S052/I.3(c): Applied Data Analysis — Roadmap of the Course: What Is Today's Topic Area?

More details can be found in the "Course Objectives and Content" handout on the course webpage.

Multiple Regression Analysis (MRA):
- Do your residuals meet the required assumptions? Test for residual normality; use influence statistics to detect atypical datapoints.
- If your residuals are not independent, replace OLS by GLS regression analysis, use individual growth modeling, or specify a multilevel model.
- If your sole predictor is continuous, MRA is identical to correlational analysis.
- If your sole predictor is dichotomous, MRA is identical to a t-test.
- If your several predictors are categorical, MRA is identical to ANOVA.
- If time is a predictor, you need discrete-time survival analysis.
- If your outcome is categorical, you need binomial logistic regression analysis (dichotomous outcome) or multinomial logistic regression analysis (polytomous outcome).
- If you have more predictors than you can deal with: create taxonomies of fitted models and compare them; form composites of the indicators of any common construct; conduct a Principal Components Analysis; use Cluster Analysis.
- If your outcome-vs.-predictor relationship is non-linear, use non-linear regression analysis, or transform the outcome or predictor.
- How do you deal with missing data?

Today's Topic Area: forming composites of the indicators of a common construct.

2 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — Printed Syllabus: What Is Today's Topic?

Please check the inter-connections among the Roadmap, the Daily Topic Area, the Printed Syllabus, and the content of today's class when you pre-read the day's materials.

Today, in Section I.3(c) on Using the Multilevel Model to Obtain Internal Consistency Estimates of Test Reliability, I will:
- Estimate the internal consistency of a six-item scale that measures teacher satisfaction, using Cronbach's α statistic (Slides #3-#4).
- Show how to restack the dataset from its original multivariate format into the corresponding univariate or "person-item" format that is required for multilevel modeling (Slides #5-#6).
- Demonstrate estimation of the internal consistency reliability of the same scale by fitting an unconditional multilevel model, with teacher satisfaction as the outcome, in the person-item dataset.

3 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

In this analysis, I assume that variables X1 through X6 are observed "indicators" of an underlying latent construct of teacher satisfaction, and that we would like to know the reliability that we could anticipate for a scale that composited them together.

- The ALPHA option in PROC CORR requests a Cronbach's Alpha estimate of internal consistency reliability for the set of indicators listed in the VAR statement.
- I've instituted "listwise deletion" to eliminate all cases that are missing data on one or more of the variables being composited.
- _N_ is a SAS system variable, present in every SAS dataset, that counts the cases, beginning with the first.
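The course obtains this estimate from SAS's PROC CORR with the ALPHA option. As an illustrative cross-check only (not the course's SAS code, and with made-up data values), Cronbach's α with listwise deletion can be computed directly from its definition:

```python
# Illustrative sketch (not the course's SAS code): Cronbach's alpha for a
# six-indicator scale, with listwise deletion of incomplete cases.
# All data values below are made up for demonstration.

def cronbach_alpha(rows):
    """rows: list of cases, each a list of k item scores (None = missing)."""
    complete = [r for r in rows if all(x is not None for x in r)]  # listwise deletion
    k = len(complete[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([r[i] for r in complete]) for i in range(k)]
    total_var = var([sum(r) for r in complete])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

data = [
    [4, 5, 4, 4, 5, 4],
    [2, 2, 3, 2, 2, 3],
    [5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2],
    [1, 2, 1, 2, 1, None],   # incomplete case, dropped by listwise deletion
    [4, 4, 5, 4, 5, 4],
]
print(round(cronbach_alpha(data), 3))  # → 0.954
```

Listwise deletion here mirrors the SAS step on the slide: the case with a missing X6 never enters the reliability computation.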

4 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

These are "deleted-indicator" estimates of Cronbach's Alpha internal consistency reliability for a "temporarily reduced" composite that does not include the current indicator. Inspection of these estimates, indicator by indicator, helps identify whether an indicator should be retained in the composite:
- If an indicator truly belongs in the composite, then the "deleted-indicator" reliability estimate will be lower than the reliability of the full composite.
- If an indicator does not belong in the composite, then the "deleted-indicator" reliability estimate will be higher than the reliability of the full composite.

Here are estimates of Cronbach's Alpha internal consistency reliability for the full composite formed from all six standardized indicators. Notice that the "raw" and the "standardized" reliability estimates are identical in this case, because I chose to standardize the raw scores explicitly before entering them into the reliability estimation. If I had not standardized them explicitly in advance, then the "raw" and "standardized" reliability estimates would have differed by an amount that depended on how heterogeneous the sample variances of the raw indicators were.
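The "deleted-indicator" diagnostic is easy to sketch by hand. The following is an illustrative example (not the course's SAS output) with made-up data in which a deliberately inconsistent sixth indicator raises the reliability of the scale when it is removed:

```python
# Illustrative sketch: "deleted-indicator" Cronbach's alpha — the scale's
# reliability recomputed with each indicator left out in turn.
# Data values are made up; X6 is constructed to be inconsistent with X1-X5.

def alpha(rows):
    k = len(rows[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = sum(var([r[i] for r in rows]) for i in range(k))
    return (k / (k - 1)) * (1 - item_vars / var([sum(r) for r in rows]))

data = [
    [4, 5, 4, 4, 5, 1],
    [2, 2, 3, 2, 2, 5],
    [5, 4, 5, 5, 4, 2],
    [3, 3, 2, 3, 3, 4],
    [4, 4, 5, 4, 5, 1],
]
full = alpha(data)  # reliability of the full six-indicator composite
for i in range(6):
    reduced = alpha([[x for j, x in enumerate(r) if j != i] for r in data])
    flag = "consider dropping" if reduced > full else "retain"
    print(f"without X{i + 1}: alpha = {reduced:.3f} ({flag})")
```

With this made-up data the full-composite α is about 0.323, while α without X6 jumps to about 0.949 — exactly the pattern the slide describes for an indicator that does not belong in the composite.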

5 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

PROC PRINT reveals the current organization of the data (Teachers #1, #2, #3, #5 shown), in which a teacher's responses to each of the six indicators are stored in a "multivariate" or "horizontal" format: a teacher's responses to the six indicators of satisfaction are listed within a single row, each response contained in a separate variable, X1 through X6.

6 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

To estimate internal consistency reliability using multilevel modeling, we must reorganize the data from a "multivariate" into a "longitudinal" or "univariate" format, in which a teacher's responses to each of the six indicators of satisfaction are listed "vertically" beneath each other, in a single variable representing satisfaction, here called X.

PROC PRINT reveals the impact of this reorganization on the dataset (Teachers #1 and #2 shown). Notice that there is now a single measure, X, of teacher satisfaction, containing six values for each teacher, distributed across six rows (rather than across six columns, as in the earlier multivariate format).
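The course performs this restacking with a SAS DATA step. The same "wide to long" reshape can be sketched in a few lines of Python (illustrative only, with made-up values; the field names `id`, `item`, and `X` are my own labels):

```python
# Illustrative restack from "multivariate" format (one row per teacher,
# X1-X6 as separate columns) to "univariate"/person-item format (one row
# per teacher-item pair, all responses in a single variable X).
# Values and field names are made up for demonstration.

wide = [
    {"id": 1, "X1": 4, "X2": 5, "X3": 4, "X4": 4, "X5": 5, "X6": 4},
    {"id": 2, "X1": 2, "X2": 2, "X3": 3, "X4": 2, "X5": 2, "X6": 3},
]

long = [
    {"id": row["id"], "item": i, "X": row[f"X{i}"]}
    for row in wide
    for i in range(1, 7)
]

for rec in long[:3]:
    print(rec)
# Each teacher now contributes six rows, one per indicator, with every
# response held in the single variable X.
```

The key structural change is the one the slide describes: two teachers, formerly two rows of six columns, become twelve rows of a single outcome column X, ready to serve as the Level-1 records of a multilevel model.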

7 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

In this new format, we have replicated measurement over teachers. So, we can use multilevel modeling to fit an unconditional model, with X as the outcome and teacher ID as the grouping variable. This lets us estimate the Level-1 and Level-2 variance components:
- The between-teacher variance component captures the true variability in satisfaction across teachers. If we assume that the measurement errors implicit in the different indicators tend to cancel each other out, then we can regard the estimated between-teacher variance component as a measure of the true variability in satisfaction in each indicator.
- The within-teacher variance component summarizes the average flutter in a single teacher's ratings of their satisfaction across indicators. Because we assume that each indicator measures the same latent construct of satisfaction, these differences must be due to measurement error. Thus, the estimated within-teacher variance component estimates the measurement-error variance of each indicator.
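The course estimates these two components by fitting the unconditional multilevel model in dedicated software. As an illustrative sketch only (made-up data, not the course's fitted model), in the balanced case — six items for every teacher — moment estimates of the same two components can be recovered from a one-way random-effects decomposition of the person-item data:

```python
# Illustrative moment estimates of the within-teacher (L1) and
# between-teacher (L2) variance components from balanced person-item data
# with k items per teacher. This is a moment-based sketch, not the
# multilevel-software fit used in the course. Data values are made up.

scores = {  # teacher id -> six item responses (the univariate X, regrouped)
    1: [4, 5, 4, 4, 5, 4],
    2: [2, 2, 3, 2, 2, 3],
    3: [5, 4, 5, 5, 4, 5],
    4: [3, 3, 2, 3, 3, 2],
}
k = 6                      # items per teacher
n = len(scores)            # teachers

grand = sum(x for xs in scores.values() for x in xs) / (n * k)
means = {t: sum(xs) / k for t, xs in scores.items()}

# Within-teacher mean square estimates sigma^2_within (L1, measurement error).
ms_within = sum((x - means[t]) ** 2
                for t, xs in scores.items() for x in xs) / (n * (k - 1))
# The excess of the between-teacher mean square over MS_within, per item,
# estimates sigma^2_between (L2, true score variance).
ms_between = k * sum((m - grand) ** 2 for m in means.values()) / (n - 1)
sigma2_within = ms_within
sigma2_between = (ms_between - ms_within) / k

print(f"within-teacher (L1):  {sigma2_within:.3f}")
print(f"between-teacher (L2): {sigma2_between:.3f}")
```

In this made-up example most of the variance in X lies between teachers, i.e. the indicators mostly agree within each teacher, which is what a reliable scale should show.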

8 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

There are six indicators, X1 through X6. Each measures the same true score, τ, and each has a unique error term, ε1 through ε6. So, the TOTAL observed score is just the sum of these:

  TOTAL = X1 + X2 + … + X6 = (τ + ε1) + (τ + ε2) + … + (τ + ε6) = 6τ + (ε1 + ε2 + … + ε6)

Population variance of TOTAL (with the errors mutually independent and independent of τ):

  Var(TOTAL) = 36 σ²_τ + 6 σ²_ε

So Cronbach's α reliability — the proportion of total-score variance that is true-score variance — can be written in terms of the L2 (between-teacher, σ²_τ) and L1 (within-teacher, σ²_ε) residual variances in an unconditional multilevel model for X:

  α = 36 σ²_τ / (36 σ²_τ + 6 σ²_ε) = 6 σ²_τ / (6 σ²_τ + σ²_ε)
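Plugging estimated variance components into that formula can be sketched directly (illustrative, made-up data, not the course's SAS output). Note that in a finite sample this multilevel estimate and the classical Cronbach's α are close but need not coincide exactly, because the two are built from slightly different estimators:

```python
# Illustrative sketch: alpha = 6*s2_between / (6*s2_between + s2_within),
# with moment estimates of the variance components from balanced
# person-item data, compared against classical Cronbach's alpha computed
# on the same (made-up) data.

scores = {
    1: [4, 5, 4, 4, 5, 4],
    2: [2, 2, 3, 2, 2, 3],
    3: [5, 4, 5, 5, 4, 5],
    4: [3, 3, 2, 3, 3, 2],
}
k, n = 6, len(scores)

grand = sum(x for xs in scores.values() for x in xs) / (n * k)
means = {t: sum(xs) / k for t, xs in scores.items()}
ms_within = sum((x - means[t]) ** 2
                for t, xs in scores.items() for x in xs) / (n * (k - 1))
ms_between = k * sum((m - grand) ** 2 for m in means.values()) / (n - 1)
s2_within = ms_within
s2_between = (ms_between - ms_within) / k

# Reliability from the variance-component formula on the slide.
alpha_mlm = k * s2_between / (k * s2_between + s2_within)

# Classical Cronbach's alpha on the same data, for comparison.
def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

rows = list(scores.values())
item_vars = sum(var([r[i] for r in rows]) for i in range(k))
alpha_classic = (k / (k - 1)) * (1 - item_vars / var([sum(r) for r in rows]))

print(f"alpha from variance components: {alpha_mlm:.3f}")    # → 0.968
print(f"classical Cronbach's alpha:     {alpha_classic:.3f}")  # → 0.957
```

Both estimates target the same population quantity, 6σ²_τ / (6σ²_τ + σ²_ε); the small sample-level gap comes from the different degrees-of-freedom conventions behind the two computations.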

9 S052/I.3(c): Multilevel Modeling & Internal Consistency Reliability — TSUCCESS data

[Fitted unconditional multilevel model output, annotated with:]
- Estimated true variance in total teacher satisfaction.
- Estimated error variance in a single indicator of teacher satisfaction.

