Reliability Analysis
Overview of Reliability
- What is Reliability?
- Ways to Measure Reliability
- Interpreting Test-Retest and Parallel Forms
- Measuring and Interpreting Internal Consistency
What is Reliability?
- Reliability is the extent to which results are consistent.
- Validity is the extent to which the instrument measures what it claims to measure.
- A good measurement instrument is both reliable and valid.
- Reliability is a prerequisite for validity: an instrument cannot measure what it claims to measure if its results are not consistent.
Ways to Measure Reliability
- Test-Retest
- Parallel (Equivalent) Forms
- Internal Consistency
Interpreting Test-Retest and Parallel Forms Reliability
- Measured with a correlation coefficient (Pearson r) between two administrations of the same test (test-retest) or between two forms (parallel forms).
- Generally an r of .7-.8 is considered good reliability, but the acceptable threshold depends on what other instruments are available.
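A minimal sketch of the test-retest computation, assuming scores from the same respondents on two administrations; the data here are hypothetical, invented only for illustration:

import numpy as np
from scipy.stats import pearsonr

# Hypothetical scores for 8 respondents on two administrations of the same test.
time1 = np.array([12, 15, 11, 18, 14, 16, 13, 17])
time2 = np.array([13, 14, 12, 17, 15, 16, 12, 18])

r, p = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r:.2f}")  # .7-.8 or higher is generally considered good

For parallel forms, the same computation applies with scores on form A and form B in place of the two administrations.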
Internal Consistency Reliability
- Consistency of items within a measurement instrument.
- Split-half: divide the test items into two groups, obtain a score for each half, and correlate the two scores (see the sketch below).
- Cronbach's alpha: the average of all possible split-half estimates.
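A minimal sketch of both estimates, assuming a small respondents-by-items score matrix; the data and the odd/even split are illustrative choices, not part of the slides. Because each half is only half the test's length, the raw half-half correlation understates full-test reliability, so it is conventionally stepped up with the Spearman-Brown formula:

import numpy as np

# Hypothetical item-response matrix: 6 respondents x 4 items.
X = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
], dtype=float)

# Split-half: correlate odd-item and even-item half scores,
# then apply the Spearman-Brown correction: r_full = 2r / (1 + r).
odd, even = X[:, ::2].sum(axis=1), X[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score).
k = X.shape[1]
item_vars = X.var(axis=0, ddof=1)
total_var = X.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)

print(f"Split-half (Spearman-Brown corrected): {split_half:.2f}")
print(f"Cronbach's alpha: {alpha:.2f}")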
Assumptions for Internal Consistency Reliability
- Equivalent halves or items
- Unrelated measurement errors between halves or items
- Items represent the same underlying factor
- Items have been transformed if necessary
Interpreting Internal Consistency
- Generally an alpha of .7 or higher is considered good reliability.
- Look for items which, if removed, would substantially improve the reliability (see the sketch below).
- It may be necessary to retest and run another reliability analysis to confirm that the shortened instrument is still reliable.
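A minimal sketch of the "alpha if item deleted" check, again with hypothetical data; items whose removal raises alpha well above the full-scale value are candidates for revision or removal:

import numpy as np

def cronbach_alpha(X):
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical respondents x items matrix; the last item is deliberately noisy.
X = np.array([
    [3, 4, 3, 1],
    [2, 2, 3, 5],
    [4, 5, 4, 2],
    [1, 2, 1, 4],
    [3, 3, 4, 1],
    [5, 4, 5, 3],
], dtype=float)

print(f"Alpha, all items: {cronbach_alpha(X):.2f}")
for i in range(X.shape[1]):
    reduced = np.delete(X, i, axis=1)  # drop item i and recompute
    print(f"Alpha if item {i} deleted: {cronbach_alpha(reduced):.2f}")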
Take-Home Points
- Reliability is a prerequisite for validity.
- Correlations and correlation-based statistics are often used as indexes of reliability.