COM 633: Content Analysis Reliability
Kimberly A. Neuendorf, Ph.D., Cleveland State University, Fall 2010
Reliability
Generally: the extent to which a measuring procedure yields the same results on repeated trials (Carmines & Zeller, 1979)
Types:
- Test-retest: Same people, different times. In content analysis, this is intracoder reliability.
- Alternative-forms: Same people, same time, different measures.
- Internal consistency: Multiple measures, same construct.
- Inter-rater/Intercoder: Different people, same measures.
Index/Scale Construction
Similar to survey or experimental work (e.g., Bond analysis: harm to female, sexual activity)
Need to check internal consistency reliability (e.g., Cronbach's alpha; a computational sketch follows below)
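Not from the original slides: a minimal Python sketch of Cronbach's alpha for a multi-item index, assuming a hypothetical cases-by-items matrix of coded scores (all names and numbers here are illustrative).

import numpy as np

def cronbach_alpha(item_scores):
    # item_scores: cases x items array of coded values for one index
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]                            # number of items in the index
    item_vars = x.var(axis=0, ddof=1)         # variance of each item across cases
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the summed index
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 coded units, 3 items combined into one index
print(cronbach_alpha([[1, 2, 2], [3, 3, 4], [2, 2, 3], [4, 5, 4], [1, 1, 2]]))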
Intercoder Reliability
Defined: the level of agreement or correspondence on a measured variable among two or more coders
What contributes to good reliability? Careful unitizing, codebook construction, and coder training (training, training!)
Reliability Subsamples
Pilot and final reliability subsamples, because of drift, fatigue, experience
Selection of subsamples (see the sketch below):
- Random, representative subsample
- "Rich range" subsample: useful for "rare event" measures
Reliability/variance relationship
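A minimal sketch (not from the slides) of drawing the two kinds of reliability subsamples; the sampling frame, the rare-event flag, and the subsample sizes are all hypothetical.

import random

unit_ids = [f"unit_{i:03d}" for i in range(1, 501)]          # hypothetical sampling frame of 500 units
rare_units = [u for u in unit_ids if int(u[-3:]) % 20 == 0]  # hypothetical units containing a rare feature
common_units = [u for u in unit_ids if u not in rare_units]

# Random, representative subsample: the one to use when reporting reliability
reliability_subsample = random.sample(unit_ids, 60)

# "Rich range" subsample: over-include rare-event units so coders actually encounter them
rich_range_subsample = random.sample(rare_units, 15) + random.sample(common_units, 15)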
Reliability Statistics - 1
Types (contrasted in the sketch below):
- Agreement
- Agreement beyond chance
- Covariation
Core assumptions of the coefficients: "more scholarship is needed"; these coefficients have not been assessed!
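Not from the slides: a small Python illustration, using made-up codes, of why raw agreement and agreement beyond chance can diverge on a skewed (rare-event) nominal variable.

# Hypothetical codes from two coders (1 = feature present, 0 = absent); the feature is rare
coder_a = [0] * 18 + [1, 0]
coder_b = [0] * 18 + [0, 1]

n = len(coder_a)
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n   # raw percent agreement

# Cohen's kappa: (observed - expected) / (1 - expected), expected agreement from the coders' marginals
categories = set(coder_a) | set(coder_b)
p_e = sum((coder_a.count(c) / n) * (coder_b.count(c) / n) for c in categories)
kappa = (agreement - p_e) / (1 - p_e)

print(agreement, kappa)   # 0.90 raw agreement, but kappa is roughly zero (slightly negative here)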
Reliability Statistics - 2
My recommendations:
- Do NOT use percent agreement ALONE
- Nominal/ordinal: kappa (Cohen's, Fleiss')
- Interval/ratio: Lin's concordance
- Calculated via PRAM
Reliability analyses as diagnostics (see the sketch below), e.g.:
- Problematic variables, coders ("rogues"?), variable/coder interactions
- Confusion matrices (categories that tend to be confused)
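Not PRAM itself, and not from the slides: a short Python sketch, with hypothetical ratings, of Lin's concordance correlation coefficient for an interval/ratio measure and a simple confusion cross-tab diagnostic for a nominal variable.

import numpy as np
from collections import Counter

def lins_ccc(x, y):
    # Lin's concordance correlation coefficient for two coders' interval/ratio scores
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    sxy = np.cov(x, y, bias=True)[0, 1]   # covariance with n in the denominator, as in Lin (1989)
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

# Hypothetical interval ratings from two coders on the same units
print(lins_ccc([3, 5, 2, 6, 4, 7, 1, 5], [4, 5, 2, 7, 4, 6, 2, 5]))

# Confusion cross-tab for a nominal variable: which category pairs get confused?
cat_a = ["news", "ad", "news", "promo", "ad", "news"]
cat_b = ["news", "promo", "news", "promo", "ad", "ad"]
print(Counter(zip(cat_a, cat_b)))   # counts of (coder A, coder B) category pairs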
PRAM: Program for Reliability Analysis with Multiple Coders
Written by rocket scientists! Trial version available from Dr. N!