Rasch trees: A new method for detecting differential item functioning in the Rasch model
Carolin Strobl, Julia Kopf, Achim Zeileis

Introduction to DIF
 Most DIF methods compare item parameter estimates between two or more pre-specified groups.
 The results can be interpreted straightforwardly.
 But they cannot rule out the influence of factors that were not pre-specified in the analysis.
 The latent class (or mixture) approach (Rost, 1990) requires no pre-specified groups,
 but the resulting groups have no straightforward interpretation.

A new method
 Recursively tests all groups that can be defined by (combinations of) the available covariates.
 Model-based recursive partitioning:
 related to classification and regression trees (CART);
 makes no assumptions about the data-generating function;
 recursively partitions the observations into smaller subgroups whose members are increasingly similar in the outcome variable.
 A data-driven, exploratory approach.

 Steps
1. Fit one joint Rasch model for all subjects.
2. Test statistically whether the item parameters differ along any of the covariates.
3. Select the splitting variable and the cutpoint that maximize the partitioned log-likelihood.
4. Split the sample accordingly.
5. Repeat steps 1-4 until a stopping criterion is reached.
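The five steps above can be sketched as a generic recursion. This is only an illustration of the control flow, not the paper's implementation (the real fitting is done by the psychotree R package): the model-specific parts, i.e. the CML fit, the fluctuation test, and the partitioned-log-likelihood search, are passed in as placeholder functions, and all names here are hypothetical.

```python
def grow_tree(sample, fit, test_instability, find_cutpoint, min_size=20):
    """Sketch of the Rasch-tree recursion in steps 1-5. The model-specific
    parts (joint fit, instability test, cutpoint search) are supplied as
    functions; `sample` is a list of per-person records (dicts)."""
    params = fit(sample)                          # step 1: one joint model
    var = test_instability(sample)                # step 2: unstable covariate?
    if var is None or len(sample) < 2 * min_size:
        return {"params": params, "n": len(sample)}   # leaf: stop splitting
    cut = find_cutpoint(sample, var)              # step 3: best cutpoint
    left = [obs for obs in sample if obs[var] <= cut]   # step 4: split
    right = [obs for obs in sample if obs[var] > cut]
    return {"split": (var, cut),                  # step 5: recurse on both sides
            "left": grow_tree(left, fit, test_instability, find_cutpoint, min_size),
            "right": grow_tree(right, fit, test_instability, find_cutpoint, min_size)}
```

With toy stand-ins for the three functions, the recursion produces one split and two leaves, which is the shape the method's output takes.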

Step 1: Estimate item parameters
 Use conditional maximum likelihood (CML) estimation, which conditions the person parameters out of the likelihood via the raw scores.
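To illustrate the conditioning, here is a minimal toy sketch (not the paper's implementation): given the person's raw score, the person parameter cancels, and the normalizing constants of the conditional likelihood are the elementary symmetric functions of the transformed item parameters.

```python
import math

def elementary_symmetric(eps):
    """Elementary symmetric functions gamma_0..gamma_n of the item
    easiness parameters eps, computed by the standard summation recursion."""
    gamma = [1.0] + [0.0] * len(eps)
    for e in eps:
        for r in range(len(gamma) - 1, 0, -1):
            gamma[r] += e * gamma[r - 1]
    return gamma

def conditional_loglik(data, beta):
    """Conditional log-likelihood of a 0/1 response matrix `data`
    (rows = persons) given item difficulties `beta`; the person
    parameters are conditioned out via each person's raw score."""
    eps = [math.exp(-b) for b in beta]
    gamma = elementary_symmetric(eps)
    ll = 0.0
    for row in data:
        r = sum(row)
        if r == 0 or r == len(row):
            continue  # extreme scores carry no information in CML
        ll += sum(-b * x for b, x in zip(beta, row)) - math.log(gamma[r])
    return ll
```

For two equally difficult items, each of the two patterns with raw score 1 has conditional probability 1/2, so the per-person contribution is -log(2).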

Step 2: Examine parameter instability to decide whether to split
 The test is based on the individual contributions to the score function, i.e. the derivative of each person's conditional log-likelihood with respect to the item parameters, evaluated at the CML estimate.

Examine structural change
 Use generalized M-fluctuation tests on the cumulative sums of the score contributions, ordered by the covariate:
 for numeric covariates, a statistic maximized over all candidate cutpoints;
 for categorical covariates, a statistic aggregating the fluctuation within categories.
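For a numeric covariate, the fluctuation test can be pictured as follows: order the score contributions by the covariate, accumulate them, and take the maximum of the suitably scaled squared cumulative sum over the candidate cutpoints. The code below is a simplified scalar sketch of a supLM-type statistic, not the paper's exact multivariate formula.

```python
import math

def sup_lm_statistic(scores, covariate, trim=0.1):
    """supLM-type fluctuation statistic for scalar score contributions:
    order the contributions by the covariate, form the scaled cumulative
    score process, and maximize its square over candidate cutpoints."""
    n = len(scores)
    # order the person-wise score contributions by the covariate
    ordered = [s for _, s in sorted(zip(covariate, scores))]
    mean = sum(ordered) / n
    centered = [s - mean for s in ordered]
    var = sum(c * c for c in centered) / n  # empirical variance of the scores
    stat = 0.0
    cum = 0.0
    for i, c in enumerate(centered[:-1], start=1):
        cum += c
        t = i / n
        if trim <= t <= 1 - trim:  # trim the extremes, as usual for supLM tests
            stat = max(stat, (cum / math.sqrt(n * var)) ** 2 / (t * (1 - t)))
    return stat
```

A clear shift in the score contributions along the covariate (here, from -1 to +1 at the midpoint) drives the statistic up, which is exactly the pattern that signals DIF along that covariate.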

Step 3: Select the cutpoint
 Choose the cutpoint that maximizes the partitioned log-likelihood, i.e. the sum of the log-likelihoods of the Rasch models fitted separately to the two subsamples defined by the cutpoint.
 This formulation does not describe how the proposed method examines more than one cutpoint in a single test.
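A toy sketch of the cutpoint search: every admissible cut of the ordered numeric covariate is tried, the model is refit on each side, and the cut that maximizes the summed log-likelihoods wins. For brevity, a maximized Bernoulli log-likelihood stands in for the Rasch model's conditional log-likelihood; the function names are illustrative, not from the paper.

```python
import math

def bernoulli_loglik(y):
    """Maximized Bernoulli log-likelihood of a 0/1 sample (a stand-in
    for the Rasch model's conditional log-likelihood)."""
    n, k = len(y), sum(y)
    if k == 0 or k == n:
        return 0.0  # degenerate MLE, likelihood 1
    p = k / n
    return k * math.log(p) + (n - k) * math.log(1 - p)

def best_cutpoint(covariate, y, min_size=2):
    """Return the cutpoint of a numeric covariate that maximizes the
    partitioned log-likelihood L(left) + L(right)."""
    pairs = sorted(zip(covariate, y))
    xs = [x for x, _ in pairs]
    ys = [v for _, v in pairs]
    best = (None, -math.inf)
    for i in range(min_size, len(ys) - min_size + 1):
        if xs[i - 1] == xs[i]:
            continue  # cut only between distinct covariate values
        ll = bernoulli_loglik(ys[:i]) + bernoulli_loglik(ys[i:])
        if ll > best[1]:
            best = ((xs[i - 1] + xs[i]) / 2, ll)
    return best[0]
```

When the outcome changes abruptly along the covariate, the search recovers the change point, which is the behavior the partitioned log-likelihood is designed to deliver.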

Computations
 In step 2, the score test (also termed the Lagrange multiplier test) is used.
 More efficient: only the model fitted to the current node is needed.
 In step 3, the likelihood ratio test is used.
 Using different random samples from the same data might yield different values for the optimal cutpoint.
 Advantages of the two-step approach:
 more efficient;
 avoids variable selection bias.

Stopping criteria
 Stop when there is no significant instability with respect to any covariate (α = .05).
 Stop when the sample size in a node reaches the pre-specified minimal node size.
 A Bonferroni adjustment is applied to the p-values for the number of covariates tested.
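The Bonferroni adjustment here simply multiplies each covariate's p-value by the number of covariates tested in the node; splitting stops when even the smallest adjusted p-value is not significant. A minimal sketch with illustrative helper names:

```python
def bonferroni_adjust(pvalues):
    """Bonferroni-adjust p-values for the number of covariates tested
    in a node (adjusted values are capped at 1)."""
    m = len(pvalues)
    return [min(1.0, p * m) for p in pvalues]

def select_split_variable(pvalues, alpha=0.05):
    """Return the index of the covariate with the smallest adjusted
    p-value, or None if no instability is significant (stop splitting)."""
    adj = bonferroni_adjust(pvalues)
    best = min(range(len(adj)), key=adj.__getitem__)
    return best if adj[best] < alpha else None
```

Note how a raw p-value of .04 survives on its own but not after adjustment for two covariates, so the node becomes a leaf.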

Simulation studies
 Compare the Rasch tree with the LR test.
 Criteria:
 Type I error rate and power;
 root mean squared error (RMSE) of parameter estimation;
 adjusted Rand index (ARI): the agreement between the true and the recovered partition;
 bias, variance, and mean squared error (MSE) of cutpoint estimation;
 computation time.

General settings
 5,000 replications per condition
 20 items
 Overall sample size of 500

Simulation study 1
 Settings:
 For the LR test, numeric covariates are split at the median to define the reference and focal groups.
 DIF size = 1.5
 Only one covariate (either binary or numeric)
 The cutpoint for the numeric covariate is located at the median or at 80.

Simulation study 1: Results

Simulation study 1: Results (Cont.)

Simulation study 2
 Settings:
 DIF size = 1.5
 Only one binary covariate
 Ability difference between the groups: -0.5 vs. +0.5

Simulation study 2: Results

Simulation study 2: Results (cont.)

Simulation study 3
 Settings:
 DIF size = 1.5
 DIF patterns:
 binary;
 U-shaped: young and old subjects vs. middle-aged subjects;
 interaction between two covariates.
 Cutpoint locations at the median or at 80.
 For the LR test, groups are defined by the two levels of the binary covariate or by a median split, and a Bonferroni adjustment is used.

Simulation study 3 (cont.)
 Power:
 For the LR test: the percentage of replications in which the test for DIF between the two pre-specified groups is significant.
 For the Rasch tree: the percentage of replications in which the tree makes at least one split.

Simulation study 3: Results

Simulation study 3: Results (cont.)
These rates should not be called power, because the wrong covariate is used.

An empirical example
 An online quiz of general knowledge
 1,056 university students enrolled in the federal state of Bavaria
 The history subscale with 9 items
 Gender and age used as covariates

The figures would be clearer if the item difficulty estimates of the different terminal nodes were combined in a single figure, using a different line for each node.

General comments from Prof. Wang
 Other identification constraints should be explored instead of fixing the mean item difficulty to be equal across groups.
 If only an interaction between covariates exists, with no main effects of group membership, can this method still detect DIF items?
 When many DIF items are present in the data, analyses based on the residuals will be biased, and this approach might not detect DIF items effectively.

Comments
 Might not be applicable to long tests due to the use of conditional ML.
 The Bonferroni adjustment should always be used in the Rasch tree.
 The method does not flag DIF in individual items, so what it detects should be called differential test functioning (DTF).
 Why is the minimal node size set at 20?
 The mixture approach finds covariates/groups during estimation, so it might outperform the proposed approach when the group membership is unobserved.

Future studies
 Extend the method to other models:
 the partial credit model;
 the 2PL and 3PL models.
 Extension to multiway splits? See O'Brien SM (2004). "Cutpoint Selection for Categorizing a Continuous Predictor." Biometrics, 60, 504-509.
 Can it be used for dimensionality assessment?