Item Response Theory Dan Mungas, Ph.D. Department of Neurology University of California, Davis.


What is it? Why should anyone care?

IRT Basics

Item Response Theory - What Is It
– Modern approach to psychometric test development
  – Mathematical measurement theory
  – Associated numeric and computational methods
– Widely used in large-scale educational, achievement, and aptitude testing
– More than 40 years of conceptual and methodological development

Item Response Theory - Methods
– Dataset consists of a rectangular table
  – rows correspond to subjects
  – columns correspond to items
– IRT applications simultaneously estimate subject ability and item parameters
  – iterative, maximum likelihood estimation algorithms

Physical Function Scale (Hays, Morales & Reise, 2000)

  Item                                     Limited    Limited     Not limited
                                           a lot      a little    at all
  Vigorous activities (running, lifting
    heavy objects, strenuous sports)          1           2            3
  Climbing one flight                         1           2            3
  Walking more than 1 mile                    1           2            3
  Walking one block                           1           2            3
  Bathing / dressing self                     1           2            3
  Preparing meals / doing laundry             1           2            3
  Shopping                                    1           2            3
  Getting around inside home                  1           2            3
  Feeding self                                1           2            3

Basic Data Structure

  Subject    Item1    Item2    Item3    Item4
  S1         X11      X12      X13      X14
  S2         X21      X22      X23      X24
  S3         X31      X32      X33      X34
  S4         X41      X42      X43      X44

Item Response Theory - Basic Results
– Item parameters
  – difficulty
  – discrimination
  – correction for guessing (most applicable for multiple choice items)
– Subject ability (in the psychometric sense)
  – Capacity to successfully respond to test items (or propensity to respond in a certain direction)
  – Net result of all genetic and environmental influences
  – Measured by scales composed of homogeneous items
– Item difficulty and subject ability are on the same scale

Item Response Theory - Fundamental Assumptions
– Unidimensionality: items measure a homogeneous, single domain
– Local independence: covariance among items is determined only by the latent dimension measured by the item set

IRT Models
– 1PL (Rasch)
  – Only difficulty and ability are estimated
  – Discrimination is assumed to be equal across items
– 2PL
  – Discrimination, difficulty, and ability are estimated
  – Guessing is assumed to have no effect
– 3PL
  – Discrimination, difficulty, guessing, and ability are estimated (multiple choice items)
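The three models can be viewed as one response-probability function with parameters successively freed. A minimal sketch in Python (the item parameters below are illustrative, and the D = 1.7 scaling constant sometimes used to approximate the normal-ogive metric is omitted):

```python
import math

def icc_3pl(theta, a, b, c=0.0):
    """P(correct) under the 3PL model: c + (1 - c) / (1 + exp(-a(theta - b))).

    With c=0 this reduces to the 2PL; with c=0 and a common, fixed a
    across all items, to the 1PL (Rasch) model.
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta == b, a 2PL item is answered correctly with probability 0.5;
# with guessing (c > 0), the curve's lower asymptote rises from 0 to c.
p_mid = icc_3pl(0.0, a=2.0, b=0.0)             # 0.5
p_floor = icc_3pl(-10.0, a=1.0, b=0.0, c=0.2)  # approaches 0.2
```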

Item Response Theory - Invariance Properties
– Invariance requires that basic assumptions are met
– Item parameters are invariant across different samples
  – within the range of overlap of the ability distributions
  – distributions of the samples can differ
– Ability estimates are invariant across different item sets
  – assumes that the ability range of the items spans the ability range of subjects that is of interest

Item Response Theory - Outcomes
– Item-level results
  – Item Characteristic Curve (ICC): non-linear function relating ability to the probability of a correct response to the item
  – Item Information Curve (IIC): non-linear function showing precision of measurement (reliability) at different ability points
  – Both curves are defined by the item parameters

Item Characteristic Curves

Information Curves

Item Response Theory - Outcomes
– Test-level results
  – Test Characteristic Curve (TCC): non-linear function relating ability to the expected total test score
  – Test Information Curve (TIC): non-linear function showing precision of measurement (reliability) at different ability points
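Because of local independence, the test-level curves are simply sums of the item-level ones. A sketch using a hypothetical four-item 2PL bank:

```python
import math

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical (discrimination, difficulty) pairs for four dichotomous items.
items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5), (1.0, 1.5)]

def tcc(theta):
    """Test Characteristic Curve: expected total score at ability theta."""
    return sum(icc_2pl(theta, a, b) for a, b in items)

def tic(theta):
    """Test Information Curve: sum of item informations a^2 * P * (1 - P)."""
    total = 0.0
    for a, b in items:
        p = icc_2pl(theta, a, b)
        total += a * a * p * (1.0 - p)
    return total
```

The TCC rises monotonically from 0 toward the number of items; the TIC shows where on the ability scale this particular item set measures precisely.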

Test Characteristic Curve Mini-Mental State Examination

Test Information Curves for Mattis Dementia Rating Scale and IRT Derived Scales

Why Do We Care - Applications of IRT in Health Care Settings
– Refined scoring of tests
– Characterization of psychometric properties of existing tests
– Construction of new tests

Test Scoring
IRT permits refined scoring that differentially weights items based on their item parameters.

Physical Function Scale (Hays, Morales & Reise, 2000)

  Item                                     Limited    Limited     Not limited
                                           a lot      a little    at all
  Vigorous activities (running, lifting
    heavy objects, strenuous sports)          1           2            3
  Climbing one flight                         1           2            3
  Walking more than 1 mile                    1           2            3
  Walking one block                           1           2            3
  Bathing / dressing self                     1           2            3
  Preparing meals / doing laundry             1           2            3
  Shopping                                    1           2            3
  Getting around inside home                  1           2            3
  Feeding self                                1           2            3

How to Score the Test
Simple approach: there are numbers that will be circled; total these up, and there we have a score. But should “limited a lot” for walking a mile receive the same weight as “limited a lot” for getting around inside the home? Should “limited a lot” for walking one block be twice as bad as “limited a little” for walking one block?

How IRT Can Help IRT provides us with a data-driven means of rational scoring for such measures Items that are more discriminating are given greater weight In practice, the simple sum score is often very good; improvement is at the margins
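One common place this weighting enters is the ability estimate itself: in an expected a posteriori (EAP) score, responses to more discriminating items move the estimate more. A minimal grid-based sketch with hypothetical 2PL parameters and a standard-normal prior:

```python
import math

def icc_2pl(theta, a, b):
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def eap_score(responses, items):
    """EAP ability estimate: posterior mean of theta over a coarse grid,
    standard-normal prior; `responses` are 0/1, aligned with `items`."""
    grid = [-4.0 + 0.1 * k for k in range(81)]   # theta from -4 to 4
    num = den = 0.0
    for theta in grid:
        post = math.exp(-0.5 * theta * theta)    # unnormalized prior density
        for x, (a, b) in zip(responses, items):
            p = icc_2pl(theta, a, b)
            post *= p if x == 1 else (1.0 - p)
        num += theta * post
        den += post
    return num / den

items = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.5)]    # hypothetical (a, b) pairs
hi = eap_score([1, 1, 1], items)                 # all correct: above average
lo = eap_score([0, 0, 0], items)                 # all incorrect: below average
```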

Description of Psychometric Properties
– The Test Information Curve (TIC) shows reliability that varies continuously with ability
  – Depicts ability levels associated with high and low reliability
– The standard error of measurement is directly related to the information value I(θ)
  – SEM(θ) = 1 / sqrt(I(θ))
– SEM(θ) and I(θ) also have a direct correspondence to the traditional reliability coefficient r
  – r(θ) = 1 - 1 / I(θ)
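These conversions are simple enough to compute directly (a sketch; the information value used in the example is illustrative):

```python
import math

def sem(information):
    """Standard error of measurement at an ability point: 1 / sqrt(I(theta))."""
    return 1.0 / math.sqrt(information)

def reliability(information):
    """Traditional reliability analogue at an ability point: 1 - 1 / I(theta)."""
    return 1.0 - 1.0 / information

# For example, I(theta) = 10 gives an SEM of about 0.32 s.d. units and r = 0.90.
```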

Table: I(θ), SEM (s.d. units), and corresponding r

TICs for English and Spanish language Versions of Two Scales

Construction of New Scales Items can be selected to create scales with desired measurement properties Can be used for prospective test development Can be used to create new scales from existing tests/item pools

TICs from an Existing Global Cognition Scale and Re-Calibrated Existing Cognitive Tests

Principles of Scale Construction
– Information corresponds to assessment goals
  – Broad and flat TIC for a longitudinal change measure in a population with heterogeneous ability
  – For a selection or diagnostic test, information should peak at the point of the ability continuum where discrimination is most important

Other Issues in IRT
– Polytomous IRT models are available
  – Useful for ordinal (Likert) rating scales
  – Each possible score of the item (minus 1) is treated like a separate item with a different difficulty parameter
  – Information is greater for a polytomous item than for the same item dichotomized at a cutpoint
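The “separate item per score step” idea can be sketched with Samejima's graded response model, one common polytomous model (illustrative parameters; the ordered thresholds play the role of the per-step difficulties):

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: probability of each ordered response category.

    A k-category item has k - 1 ordered thresholds; each threshold acts like
    the difficulty of a dichotomous "step" sharing the discrimination a.
    """
    def p_star(b):  # P(responding in this step's category or higher)
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    cum = [1.0] + [p_star(b) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# A 3-category item like "limited a lot / a little / not at all":
probs = grm_category_probs(0.0, a=1.5, thresholds=[-1.0, 1.0])
```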

Other Issues in IRT
– Applicable to a broad range of content domains
– IRT certainly applies to cognitive abilities
– Also applies to other health outcomes
  – Quality of life
  – Physical function
  – Fatigue
  – Depression
  – Pain

Other Issues in IRT - Differential Item Functioning (Test Bias)
– IRT provides explicit methods to evaluate and quantify the extent to which items and tests have different measurement properties in different groups
  – e.g., racial and ethnic groups, linguistic groups, gender

English and Spanish Item Characteristic Curves for “Lamb/Cordero” Item

English and Spanish Item Characteristic Curves for “Stone/Piedra” Item

Challenges / Limitations of IRT
– Large samples are required for stable estimation
  – required sample sizes increase from 1PL to 2PL to 3PL
– Analytic methods are labor intensive
  – A number of (expensive) applications are readily available for IRT analyses
  – Evaluation of basic assumptions, identification of the appropriate model, and systematic IRT analysis require considerable expertise and labor