Methods for Estimating Reliability
Dr. Shahram Yazdani
Types of Reliability

- Inter-Rater or Inter-Observer Reliability: used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon
- Test-Retest Reliability: used to assess the consistency of a measure from one time to another
- Parallel-Forms Reliability: used to assess the consistency of the results of two tests constructed the same way from the same content domain
- Internal Consistency Reliability: used to assess the consistency of results across items within a test
Inter-Rater or Inter-Observer Reliability

[Diagram: the same object or phenomenon rated by observer 1 and observer 2; do the two ratings agree?]
Inter-Rater Reliability

Statistics used:
- Nominal/categorical data: Kappa statistic
- Ordinal data: Kendall's tau, to see whether the pairs of ranks for each of several individuals are related (e.g., two judges rate 20 elementary school children on an index of hyperactivity and rank-order them)
- Interval or ratio data: Pearson r, using the scores obtained from the hyperactivity index
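For the nominal case above, Cohen's kappa corrects observed agreement for the agreement two raters would reach by chance. A minimal pure-Python sketch (the function name and data layout are mine, not from the slides):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning nominal categories.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    proportion of agreement and p_e is the agreement expected by
    chance from each rater's marginal category frequencies.
    """
    n = len(rater1)
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from the marginal frequencies of each category.
    m1, m2 = Counter(rater1), Counter(rater2)
    p_e = sum((m1[c] / n) * (m2[c] / n) for c in set(m1) | set(m2))
    return (p_o - p_e) / (1 - p_e)  # undefined when p_e == 1
```

Kappa is 1 for perfect agreement and 0 when agreement is exactly at chance level; chance-level agreement is why raw percent agreement overstates reliability.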
Test-Retest Reliability

[Diagram: the same test administered at time 1 and again at time 2; test-retest reliability = stability over time]
Test-Retest Reliability

Statistics used:
- Pearson r or Spearman rho

Important caveat: the correlation decreases as the interval between administrations grows, because error variance increases (and may change in nature). The closer in time the two scores were obtained, the more the factors contributing to error variance are the same.
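Test-retest reliability is simply the correlation between each respondent's time-1 and time-2 scores. A minimal sketch of the Pearson r used above (the function name is mine):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between paired score lists,
    e.g. x = scores at time 1, y = scores at time 2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

The result ranges from -1 to 1; for test-retest data, values near 1 indicate stable scores across the two administrations.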
Parallel-Forms Reliability

[Diagram: form A administered at time 1 and form B at time 2; parallel-forms reliability = stability across forms]
Parallel-Forms Reliability

Statistics used:
- Pearson r or Spearman rho

Important caveat: even when the items are randomly assigned, the two forms may not be truly parallel.
Internal Consistency

- Average inter-item correlation
- Average item-total correlation
- Split-half reliability
Average Inter-Item Correlation

Definition: calculate the correlation (Pearson r) of each item with every other item, then average the resulting correlations.
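This definition can be sketched directly: correlate every distinct pair of items and average. A self-contained example, assuming scores are stored as one list per item (function names are mine):

```python
import math
from itertools import combinations

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_inter_item_correlation(items):
    """Mean Pearson r over all distinct item pairs.

    items: one list per item, each holding one score per respondent.
    """
    pairs = list(combinations(range(len(items)), 2))
    return sum(pearson_r(items[i], items[j]) for i, j in pairs) / len(pairs)
```

For a 6-item test this averages the 15 off-diagonal correlations of the matrix shown on the next slide.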
Internal Consistency Reliability

[Diagram: a 6-item test with its full 6 x 6 inter-item correlation matrix (I1-I6 by I1-I6, diagonal entries 1.00); the average inter-item correlation in the example is .90]
Average Item-Total Correlation

Definition: calculate the correlation of each item's scores with the total score, then average the resulting correlations.
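A sketch of this definition, reusing the same Pearson r and the one-list-per-item layout assumed above (names are mine):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def average_item_total_correlation(items):
    """Mean Pearson r between each item's scores and the total score.

    items: one list per item, each holding one score per respondent.
    """
    # Total score per respondent = sum across items.
    totals = [sum(scores) for scores in zip(*items)]
    return sum(pearson_r(item, totals) for item in items) / len(items)
```

In practice the item is often removed from the total before correlating (the "corrected" item-total correlation), since an item trivially correlates with a total that contains it; the slide's simpler uncorrected version is shown here.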
Internal Consistency Reliability

[Diagram: each of the six item-score columns (I1-I6) correlated with the total-score column; the average item-total correlation in the example is .85]
Split-Half Reliability

Definition: randomly divide the test items into two halves (forms A and B), calculate each respondent's score on each half, and calculate the Pearson r between the two half-scores as the index of reliability.
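The steps above can be sketched as follows, again assuming one score list per item (the function name, seeding, and layout are my additions):

```python
import math
import random

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def split_half_reliability(items, seed=0):
    """Randomly split the items into two half-tests, score each half
    per respondent, and return the Pearson r between the half scores.

    items: one list per item, each holding one score per respondent.
    """
    idx = list(range(len(items)))
    random.Random(seed).shuffle(idx)  # random split, but reproducible
    half_a, half_b = idx[:len(idx) // 2], idx[len(idx) // 2:]
    n_resp = len(items[0])
    a = [sum(items[i][r] for i in half_a) for r in range(n_resp)]
    b = [sum(items[i][r] for i in half_b) for r in range(n_resp)]
    return pearson_r(a, b)
```

One caveat not on the slide: the half-test correlation underestimates the reliability of the full-length test, which is why the Spearman-Brown correction is commonly applied to it in practice.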
Internal Consistency Reliability

[Diagram: the six-item test split into two halves (items 1, 3, 4 vs. items 2, 5, 6); the split-half correlation in the example is .87]
Cronbach's Alpha & Kuder-Richardson-20

Measures the extent to which the items on a test are homogeneous; conceptually, the mean of all possible split-half coefficients.
- Kuder-Richardson-20 (KR-20): for dichotomous data
- Cronbach's alpha: for non-dichotomous data
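Rather than averaging every split, alpha is usually computed directly from item and total-score variances: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch under the same one-list-per-item layout (function name is mine):

```python
def cronbach_alpha(items):
    """Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).

    items: one list per item, each holding one score per respondent.
    Applied to dichotomous (0/1) items, alpha coincides with KR-20.
    """
    def variance(xs):
        # Sample variance with an n - 1 denominator.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    # Total score per respondent = sum across items.
    totals = [sum(scores) for scores in zip(*items)]
    return (k / (k - 1)) * (1 - sum(variance(i) for i in items) / variance(totals))
```

When the items vary together (high inter-item correlation), the variance of the totals dwarfs the sum of item variances and alpha approaches 1; when items are unrelated, the two are close and alpha approaches 0.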
Internal Consistency Reliability

[Diagram: Cronbach's alpha as the mean of the split-half correlations over all possible splits of the six-item test: SH1 = .87, SH2 = .85, SH3 = .91, SH4 = .83, SH5 = .86, ..., SHn = .85; alpha = .85]
Reducing Measurement Error

- Pilot test your instruments and get feedback from respondents
- Train your interviewers or observers
- Make observation/measurement as unobtrusive as possible
- Double-check your data
- Triangulate across several measures that might have different biases
Validity vs. Reliability
Thank you! Any questions?