Evaluating Multi-Item Scales
Health Services Research Design (HS 225A)
November 13, 2012, 2-4pm, 5-1279 (Public Health)


Reliability Minimum Standards
0.70 or above (for group comparisons)
0.90 or higher (for individual assessment)
SEM = SD (1 - reliability)^(1/2)
95% CI = true score +/- 1.96 x SEM
If reliability = 0.90 and true z-score = 0, then CI: -.62 to +.62
Width of CI is 1.24 z-score units
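A minimal sketch of these two formulas in Python, using the slide's numbers (reliability = 0.90, true z-score = 0):

```python
# Standard error of measurement (SEM) and 95% confidence interval
# for a z-scored measure (SD = 1).

def sem(sd, reliability):
    """SEM = SD * (1 - reliability) ** 0.5"""
    return sd * (1 - reliability) ** 0.5

def ci95(true_score, sd, reliability):
    """95% CI = true score +/- 1.96 * SEM."""
    s = sem(sd, reliability)
    return true_score - 1.96 * s, true_score + 1.96 * s

lo, hi = ci95(0.0, 1.0, 0.90)
# lo, hi come out near -0.62 and +0.62, a width of about 1.24 z-score units
```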

Two Raters’ Ratings of GOP Debate Performance on Excellent to Poor Scale [1 = Poor; 2 = Fair; 3 = Good; 4 = Very good; 5 = Excellent] 1=Bachman Turner Overdrive (Good, Very Good) 2=Ging Rich (Very Good, Excellent) 3=Rue Paul (Good, Good) 4=Gaylord Perry (Fair, Poor) 5=Romulus Aurelius (Excellent, Very Good) 6=Sanatorium (Fair, Fair) (Target = 6 candidates; assessed by 2 raters)

Cross-Tab of Ratings

                 Rater 1
Rater 2     P    F    G    VG   E    Total
P           0    1    0    0    0      1
F           0    1    0    0    0      1
G           0    0    1    0    0      1
VG          0    0    1    0    1      2
E           0    0    0    1    0      1
Total       0    2    2    1    1      6

Calculating KAPPA
P_C = [(0 x 1) + (2 x 1) + (2 x 1) + (1 x 2) + (1 x 1)] / (6 x 6) = 7/36 = 0.19
P_obs = 2/6 = 0.33
Kappa = (0.33 - 0.19) / (1 - 0.19) = 0.17
Weighted kappa: 0.52 (linear), 0.77 (quadratic)
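A minimal sketch of this calculation, with the ratings taken from the debate slide (rater 1 listed first in each pair):

```python
from collections import Counter

# Six targets, two raters, categories P/F/G/VG/E.
r1 = ['G', 'VG', 'G', 'F', 'E', 'F']   # rater 1
r2 = ['VG', 'E', 'G', 'P', 'VG', 'F']  # rater 2

n = len(r1)
# Observed agreement: proportion of exact matches (2 of 6).
p_obs = sum(a == b for a, b in zip(r1, r2)) / n
# Chance agreement: product of the two raters' marginal counts, per category.
m1, m2 = Counter(r1), Counter(r2)
p_chance = sum(m1[c] * m2[c] for c in set(r1) | set(r2)) / n ** 2
kappa = (p_obs - p_chance) / (1 - p_chance)   # about 0.17
```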

Weighted Kappa (Linear and Quadratic)

        P           F           G           VG          E
P       1           .75 (.937)  .50 (.750)  .25 (.437)  0
F       .75 (.937)  1           .75 (.937)  .50 (.750)  .25 (.437)
G       .50 (.750)  .75 (.937)  1           .75 (.937)  .50 (.750)
VG      .25 (.437)  .50 (.750)  .75 (.937)  1           .75 (.937)
E       0           .25 (.437)  .50 (.750)  .75 (.937)  1

W_l = 1 - (i / (k - 1))
W_q = 1 - (i^2 / (k - 1)^2)
i = number of categories the ratings differ by; k = n of categories
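The weighted versions can be sketched the same way, using the weight formulas on this slide; with the debate ratings this reproduces the 0.52 (linear) and 0.77 (quadratic) values:

```python
# Linear- and quadratic-weighted kappa for two raters, k = 5 categories.
cats = ['P', 'F', 'G', 'VG', 'E']
k = len(cats)
idx = {c: i for i, c in enumerate(cats)}

def w_linear(i, j):
    return 1 - abs(i - j) / (k - 1)

def w_quadratic(i, j):
    return 1 - (i - j) ** 2 / (k - 1) ** 2

def weighted_kappa(r1, r2, w):
    n = len(r1)
    # Observed weighted agreement.
    p_obs = sum(w(idx[a], idx[b]) for a, b in zip(r1, r2)) / n
    # Chance-expected weighted agreement from the marginal counts.
    p_exp = sum(
        w(i, j) * sum(idx[a] == i for a in r1) * sum(idx[b] == j for b in r2)
        for i in range(k) for j in range(k)
    ) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

r1 = ['G', 'VG', 'G', 'F', 'E', 'F']
r2 = ['VG', 'E', 'G', 'P', 'VG', 'F']
kw_lin = weighted_kappa(r1, r2, w_linear)     # about 0.52
kw_quad = weighted_kappa(r1, r2, w_quadratic) # about 0.77
```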

Intraclass Correlation and Reliability

One-way:
  ICC (single rating) = (BMS - WMS) / (BMS + (k - 1) WMS)
  Reliability (mean of k ratings) = (BMS - WMS) / BMS
Two-way fixed:
  ICC = (BMS - EMS) / (BMS + (k - 1) EMS)
  Reliability = (BMS - EMS) / BMS
Two-way random:
  ICC = (BMS - EMS) / (BMS + (k - 1) EMS + k (JMS - EMS) / N)
  Reliability = (BMS - EMS) / (BMS + (JMS - EMS) / N)

BMS = Between Ratee Mean Square; N = n of ratees
WMS = Within Mean Square; k = n of items or raters
JMS = Item or Rater Mean Square
EMS = Ratee x Item (Rater) Mean Square
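A sketch of the three models as functions of the ANOVA mean squares (standard Shrout-Fleiss forms); the mean squares plugged in at the end are those computed from the debate ratings used in the surrounding examples:

```python
# ICC (single rating) and reliability (mean of k ratings) for the
# one-way, two-way fixed, and two-way random models.
def one_way(bms, wms, k):
    icc = (bms - wms) / (bms + (k - 1) * wms)
    rel = (bms - wms) / bms
    return icc, rel

def two_way_fixed(bms, ems, k):
    icc = (bms - ems) / (bms + (k - 1) * ems)
    rel = (bms - ems) / bms          # this is Cronbach's alpha
    return icc, rel

def two_way_random(bms, jms, ems, k, n):
    icc = (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)
    rel = (bms - ems) / (bms + (jms - ems) / n)
    return icc, rel

# Mean squares from the ratings example (N = 6 ratees, k = 2 raters):
icc, rel = two_way_random(bms=3.13, jms=0.00, ems=0.40, k=2, n=6)
# icc is about 0.80 and rel about 0.89
```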

Reliability of Performance Ratings

Source                  df     SS      MS
Candidates (BMS)         5   15.67    3.13
Raters (JMS)             1    0.00    0.00
Cand. x Raters (EMS)     5    2.00    0.40
Total                   11   17.67

2-way random: R = 6 (3.13 - 0.40) / [6 (3.13) + 0.00 - 0.40] = 0.89
ICC = 0.80

GOP Presidential Candidates' Responses to Two Questions about Their Health 1. Bachman Turner Overdrive (Good, Very Good) 2. Ging Rich (Very Good, Excellent) 3. Rue Paul (Good, Good) 4. Gaylord Perry (Fair, Poor) 5. Romulus Aurelius (Excellent, Very Good) 6. Sanatorium (Fair, Fair) (Target = 6 candidates; assessed by 2 items)

Two-Way Fixed Effects (Cronbach's Alpha)

Source                  df     SS      MS
Respondents (BMS)        5   15.67    3.13
Items (JMS)              1    0.00    0.00
Resp. x Items (EMS)      5    2.00    0.40
Total                   11   17.67

Alpha = (3.13 - 0.40) / 3.13 = 2.73 / 3.13 = 0.87
ICC = (3.13 - 0.40) / (3.13 + 0.40) = 0.77
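Alpha can also be computed directly from the item responses. A sketch using the two health items above, scored P=1 through E=5, and the variance form of coefficient alpha:

```python
# alpha = (k/(k-1)) * (1 - sum(item variances) / variance(total score))
ratings = {  # candidate: (item 1, item 2), scored P=1 ... E=5
    1: (3, 4), 2: (4, 5), 3: (3, 3), 4: (2, 1), 5: (5, 4), 6: (2, 2),
}
item1 = [a for a, b in ratings.values()]
item2 = [b for a, b in ratings.values()]

def var(x):
    """Sample variance (n - 1 denominator), matching the ANOVA mean squares."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

k = 2
total = [a + b for a, b in zip(item1, item2)]
alpha = (k / (k - 1)) * (1 - (var(item1) + var(item2)) / var(total))
# alpha is about 0.87, the same as (BMS - EMS) / BMS
```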

Overall Satisfaction of 12 Patients with 6 Doctors (2 patients per doctor) 1. Dr. Overdrive (p1: Good, p2: Very Good) 2. Dr. Rich (p3: Very Good, p4: Excellent) 3. Dr. Paul (p5: Good, p6: Good) 4. Dr. Perry (p7: Fair, p8: Poor) 5. Dr. Aurelius (p9: Excellent, p10: Very Good) 6. Dr. Sanatorium (p11: Fair, p12: Fair) (Target = 6 doctors; assessed by 2 patients each)

Reliability of Ratings of Doctors

Source              df     SS      MS
Respondents (BMS)    5   15.67    3.13
Within (WMS)         6    2.00    0.33
Total               11   17.67

1-way: R = (3.13 - 0.33) / 3.13 = 2.80 / 3.13 = 0.89

Candidates' Perceptions of the U.S. Economy in November & December 1. Bachman Turner Overdrive (Good, Very Good) 2. Ging Rich (Very Good, Excellent) 3. Rue Paul (Good, Good) 4. Gaylord Perry (Fair, Poor) 5. Romulus Aurelius (Excellent, Very Good) 6. Sanatorium (Fair, Fair) (target = ?; assessed by ?) How would you estimate reliability?

Two Possibilities 1) If the target is the GOP Presidential candidates, then the estimate is the same as in the earlier example about responses to two questions about their health: Alpha = 0.87. 2) If the target is the month (November vs. December), then the data structure for the 2 observations is transposed: Nov and Dec become the two "items," each assessed by the 6 candidates.

8 Hypothesized GI Distress Scales (101 items)
1. Gastroesophageal reflux (29 items)
2. Disrupted swallowing (7 items)
3. Diarrhea (7 items)
4. Bowel incontinence/soilage (4 items)
5. Nausea and vomiting (13 items)
6. Constipation (15 items)
7. Belly pain (8 items)
8. Gas/bloat/flatulence (18 items)

Gastroesophageal Reflux (29 items) In the past 7 days … How often did you have regurgitation—that is, food or liquid coming back up into your throat or mouth without vomiting? –Never –One day –2-6 days –Once a day –More than once a day

Disrupted swallowing (7 items) In the past 7 days … How often did food get stuck in your throat when you were eating? –Never –Rarely –Sometimes –Often –Always

Diarrhea (7 items) In the past 7 days … How many days did you have loose or watery stools? –No days –1 day –2 days –3-5 days –6-7 days

Bowel incontinence (4 items) In the past 7 days … How often did you have bowel incontinence—that is, have an accident because you could not make it to the bathroom in time? –No days –1 day –2-3 days –4-5 days –6-7 days

Nausea and vomiting (13 items) In the past 7 days … How often did you have nausea—that is, a feeling like you could vomit? –Never –Rarely –Sometimes –Often –Always

Constipation (15 items) In the past 7 days … How often did you strain while trying to have bowel movements? –Never –Rarely –Sometimes –Often –Always

Belly pain (8 items) In the past 7 days … How often did you have belly pain? –Never –Rarely –Sometimes –Often –Always

Gas/bloat/flatulence (18 items) In the past 7 days … How often did you feel bloated? –Never –Rarely –Sometimes –Often –Always

Recoding of Item Based on Mean Score for other 28 Items Gastroesophageal Reflux (Gisx22) In the last 7 days … How often did you burp? –Never {1} –One day {2} –2-6 days {3} –Once a day {3} –More than once a day {4}

Recoding of Item Based on Mean Score for other 14 Items Constipation (Gisx62) In the last 7 days … How often did you have a bowel movement? –Never {3} –One day {2} –2-6 days {1} –Once a day {1} –More than once a day {1}

Recoding of Item Based on Mean Score for other 17 Items Gas/Bloat/Flatulence (Gisx105) In the last 7 days … How often did you pass gas? –Never {1} –Rarely (only once or twice a day) {2} –About every 3-4 hours {3} –About every 2 hours {4} –About every hour {4}

Internal Consistency Reliability for 8 Scales (100 items)

Scale                        Number of items    Alpha
Gastroesophageal Reflux            29
Disrupted swallowing                7           0.90
Diarrhea                            7           0.89
Bowel incontinence                  4           0.87
Nausea/vomiting                    13
Constipation+                      14
Belly pain                          8           0.93
Gas/bloat                          18

Reliability standards: 0.70 or above for group comparisons; 0.90 for individual assessment.
+ How often did you have a bowel movement? (Gisx62) did not go with the Constipation scale.

Item-scale correlation matrices (figures)

Five Item-Scale Correlation Issues
1. Gisx22 (burp) correlates only 0.30 with the Gastroesophageal Reflux scale.
2, 3, 4. Three items correlate as highly with Disrupted Swallowing as with the Gastroesophageal Reflux scale:
–Gisx28 (lump in throat frequency)
–Gisx29 (lump in throat interfere)
–Gisx30 (lump in throat bother)
5. Gisx33 (pain in chest when swallowing food) correlates as highly with Gastroesophageal Reflux as with the Disrupted Swallowing scale.

Confirmatory Factor Analysis

Confirmatory Factor Analysis Fit Indices
Normed fit index:       NFI  = (χ²_null - χ²_model) / χ²_null
Non-normed fit index:   NNFI = (χ²_null/df_null - χ²_model/df_model) / (χ²_null/df_null - 1)
Comparative fit index:  CFI  = 1 - (χ²_model - df_model) / (χ²_null - df_null)
RMSEA = SQRT(χ²_model - df_model) / SQRT(df_model (N - 1))
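These indices are simple functions of the model and null (independence-model) chi-squares. A sketch; the chi-square values at the end are made up purely to exercise the formulas (the slides report CFI = 0.95 and RMSEA = 0.049 but not the underlying chi-squares):

```python
import math

def nfi(chi2_null, chi2_model):
    """Normed fit index."""
    return (chi2_null - chi2_model) / chi2_null

def nnfi(chi2_null, df_null, chi2_model, df_model):
    """Non-normed fit index (Tucker-Lewis index)."""
    return ((chi2_null / df_null) - (chi2_model / df_model)) / \
           ((chi2_null / df_null) - 1)

def cfi(chi2_null, df_null, chi2_model, df_model):
    """Comparative fit index."""
    return 1 - (chi2_model - df_model) / (chi2_null - df_null)

def rmsea(chi2_model, df_model, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2_model - df_model, 0.0)) / \
           math.sqrt(df_model * (n - 1))

# Illustrative (made-up) values only:
fit = (nfi(1000.0, 100.0), nnfi(1000.0, 45, 100.0, 40),
       cfi(1000.0, 45, 100.0, 40), rmsea(100.0, 40, 500))
```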

Confirmatory Factor Analysis
The eight-factor categorical model fit the data well (CFI = 0.95 and RMSEA = 0.049).
Lower loadings on Gastroesophageal Reflux for:
–Gisx22 (burp frequency): 0.38
–Gisx20 (pain behind breastbone): 0.39
Secondary loadings on Gas for:
–GIsx48 (thought gas but liquid; BI): 0.22
–GIsx72 (unfinished frequency; Con.): 0.22
–GIsx73 (feeling unfinished bother; Con.)

Correlated Residuals r = 0.58 (Gas/bloat/flatulence) –GI108 (freq. gurgling in belly) with GI109 (freq. gurgling when not hungry) r = 0.53 (Constipation) –GI75 (freq. pain in anus) with GI76 (bother pain in anus) r = 0.50 (Gas/bloat/flatulence) –GI106 (gas interference) with GI107 (gas bother)

Item Response Theory (IRT) IRT models the relationship between a person's response Y_i to question i and his or her level of the latent construct θ being measured. The threshold parameter b_ik estimates how difficult it is for item i to be rated k or higher, and the discrimination parameter a_i estimates the discriminatory power of the item.

Item Responses and Trait Levels
(figure: Items 1-3 and Persons 1-3 positioned along the trait continuum)

Bowel Incontinence Items How often did you soil or dirty your underwear before getting to a bathroom? How often did you have bowel incontinence— that is, have an accident because you could not make it to the bathroom in time? How often did you leak stool or soil your underwear? How often did you think you were going to pass gas, but stool or liquid came out instead?

Graded Response Model Parameters (Incontinence)

Item    Slope   Threshold 1   Threshold 2   Threshold 3   Threshold 4
Gisx
Gisx
Gisx
Gisx

1. How often did you soil or dirty your underwear before getting to a bathroom?
2. How often did you have bowel incontinence—that is, have an accident because you could not make it to the bathroom in time?
3. How often did you leak stool or soil your underwear? [No days; 1 day; 2-3 days; 4-5 days; 6-7 days]
4. How often did you think you were going to pass gas, but stool or liquid came out instead? [Never; Rarely; Sometimes; Often; Always]
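As a sketch, the graded response model turns a slope and a set of thresholds into category probabilities. The slope and threshold values below are illustrative only, not the estimates from the table:

```python
import math

def p_k_or_higher(theta, a, b_k):
    """P(Y >= k | theta): a 2PL curve with slope a and threshold b_k."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b_k)))

def category_probs(theta, a, thresholds):
    """P(Y = k) for each category, as differences of adjacent curves."""
    stars = [1.0] + [p_k_or_higher(theta, a, b) for b in thresholds] + [0.0]
    return [stars[k] - stars[k + 1] for k in range(len(stars) - 1)]

# Illustrative parameters: slope 2.0, thresholds for a 5-category item.
probs = category_probs(theta=1.0, a=2.0, thresholds=[0.5, 1.0, 1.5, 2.0])
# probs has one entry per response category and sums to 1
```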

Reliability and SEM For z-scores (mean = 0 and SD = 1): –Reliability = 1 – SE 2 –So reliability = 0.90 when SE = 0.32 For T-scores (mean = 50 and SD = 10): –Reliability = 1 – (SE/10) 2 –So reliability = 0.90 when SE = 3.2
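A quick check of the T-score version, using the SE values reported on the anger-item slides that follow:

```python
# Reliability at a given score from its standard error, T-score metric
# (mean = 50, SD = 10): reliability = 1 - (SE / 10) ** 2.
def reliability_from_se(se, sd=10.0):
    return 1 - (se / sd) ** 2

rels = [reliability_from_se(se) for se in (5.7, 4.8, 3.9, 3.6, 3.2, 2.8)]
# approximately 0.68, 0.77, 0.85, 0.87, 0.90, 0.92
```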

In the past 7 days I was grouchy [1st question] –Never –Rarely –Sometimes –Often –Always Theta = 56.1 SE = 5.7 (rel. = 0.68)

In the past 7 days … I felt like I was ready to explode [2nd question] –Never –Rarely –Sometimes –Often –Always Theta = 51.9 SE = 4.8 (rel. = 0.77)

In the past 7 days … I felt angry [3rd question] –Never –Rarely –Sometimes –Often –Always Theta = 50.5 SE = 3.9 (rel. = 0.85)

In the past 7 days … I felt angrier than I thought I should [4th question] –Never –Rarely –Sometimes –Often –Always Theta = 48.8 SE = 3.6 (rel. = 0.87)

In the past 7 days … I felt annoyed [5th question] –Never –Rarely –Sometimes –Often –Always Theta = 50.1 SE = 3.2 (rel. = 0.90)

In the past 7 days … I made myself angry about something just by thinking about it. [6th question] –Never –Rarely –Sometimes –Often –Always Theta = 50.2 SE = 2.8 (rel. = 0.92)

Theta and SEM estimates 56 and 6 (reliability =.68) 52 and 5 (reliability =.77) 50 and 4 (reliability =.85) 49 and 4 (reliability =.87) 50 and 3 (reliability =.90) 50 and <3 (reliability =.92)

Thank you. Powerpoint file posted at URL below (freely available for you to use, copy or burn): Contact information: For a good time call or go to:

Appendices: ANOVA Computations
Candidate/Respondent SS = (7² + 9² + 6² + 3² + 9² + 4²)/2 - 38²/12 = 136.00 - 120.33 = 15.67
Rater/Item SS = (19² + 19²)/6 - 38²/12 = 120.33 - 120.33 = 0.00
Total SS = (3² + 4² + 4² + 5² + 3² + 3² + 2² + 1² + 5² + 4² + 2² + 2²) - 38²/12 = 138.00 - 120.33 = 17.67
Res. x Item SS = Tot. SS - (Res. SS + Item SS) = 17.67 - (15.67 + 0.00) = 2.00
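A sketch reproducing these sums of squares from the raw ratings (scored P=1 through E=5, 6 ratees x 2 raters):

```python
# One pair of ratings per ratee: (rater 1, rater 2).
ratings = [(3, 4), (4, 5), (3, 3), (2, 1), (5, 4), (2, 2)]
flat = [x for pair in ratings for x in pair]
n, k = len(ratings), 2
grand = sum(flat)                       # 38
cf = grand ** 2 / (n * k)               # correction factor: 38**2 / 12

ss_ratee = sum(sum(p) ** 2 for p in ratings) / k - cf            # 15.67
ss_rater = sum(s ** 2 for s in
               (sum(p[i] for p in ratings) for i in range(k))) / n - cf  # 0.00
ss_total = sum(x ** 2 for x in flat) - cf                        # 17.67
ss_resid = ss_total - (ss_ratee + ss_rater)                      # 2.00
```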

options ls=130 ps=52 nocenter;
options nofmterr;

* Ratings scored P=1, F=2, G=3, VG=4, E=5;
* (data lines reconstructed from the ratings slide; columns: id 1-2, rater 4, rating 5);
data one;
  input id 1-2 rater 4 rating 5;
cards;
 1 13
 1 24
 2 14
 2 25
 3 13
 3 23
 4 12
 4 21
 5 15
 5 24
 6 12
 6 22
;
run;
**************;

proc freq;
  tables rater rating;
run;
*******************;
proc means;
  var rater rating;
run;
*******************************************;
proc anova;
  class id rater;
  model rating = id rater id*rater;
run;
*******************************************;

* (data lines reconstructed from the ratings slide);
data one;
  input id 1-2 rater 4 rating 5;
cards;
 1 13
 1 24
 2 14
 2 25
 3 13
 3 23
 4 12
 4 21
 5 15
 5 24
 6 12
 6 22
;
run;
******************************************************************;
%GRIP(indata=one, targetv=id, repeatv=rater, dv=rating,
      type=1, t1=test of GRIP macro, t2=);
GRIP macro is available at:

* One record per ratee: rater 1 in column 4, rater 2 in column 5;
* (data lines reconstructed from the ratings slide);
data one;
  input id 1-2 rater1 4 rater2 5;
  control = 1;
CARDS;
 1 34
 2 45
 3 33
 4 21
 5 54
 6 22
;
run;
**************;
* DUMMY covers all 25 rating pairs so the crosstab comes out square;
* (the control=2 flag and the data lines are assumed; they did not survive the transcript);
DATA DUMMY;
  INPUT id 1-2 rater1 4 rater2 5;
  control = 2;
CARDS;
 1 11
 2 12
 3 13
 4 14
 5 15
 6 21
 7 22
 8 23
 9 24
10 25
11 31
12 32
13 33
14 34
15 35
16 41
17 42
18 43
19 44
20 45
21 51
22 52
23 53
24 54
25 55
;
RUN;

DATA NEW;
  SET ONE DUMMY;
RUN;
PROC FREQ;
  TABLES CONTROL*RATER1*RATER2 / NOCOL NOROW NOPERCENT AGREE;
RUN;
*******************************************;
data one;
  set one;
run;
*****************************************;
proc means;
  var rater1 rater2;
run;
*******************************************;
proc corr alpha;
  var rater1 rater2;
run;

Guidelines for Interpreting Kappa

Fleiss (1981)              Landis and Koch (1977)
Conclusion   Kappa         Conclusion       Kappa
Poor         < .40         Poor             < 0.0
Fair         .40 - .59     Slight           .00 - .20
Good         .60 - .74     Fair             .21 - .40
Excellent    > .74         Moderate         .41 - .60
                           Substantial      .61 - .80
                           Almost perfect   .81 - 1.00