Reliability Analysis
Overview of Reliability
- What is Reliability?
- Ways to Measure Reliability
- Interpreting Test-Retest and Parallel Forms
- Measuring and Interpreting Internal Consistency
What is Reliability?
Reliability is the extent to which results are consistent. Validity is the extent to which the instrument measures what it claims to measure. A good measurement instrument is both reliable and valid, and reliability is a prerequisite for validity.
Ways to Measure Reliability
- Test-Retest
- Parallel (Equivalent) Forms
- Internal Consistency
Interpreting Test-Retest and Parallel Forms Reliability
Measured with a correlation coefficient (Pearson r) between the two administrations (test-retest) or between the two forms (parallel forms). Generally an r of .7-.8 is considered good reliability, but it depends on what else is available.
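A minimal sketch of this computation, assuming two NumPy arrays of scores from the same participants on two administrations (the names test1/test2 and the data are made up for illustration):

```python
import numpy as np

# Hypothetical scores for the same 8 participants on two administrations
test1 = np.array([12, 15, 11, 19, 14, 16, 10, 18])
test2 = np.array([13, 14, 12, 20, 15, 15, 11, 17])

# Pearson r between the two administrations (test-retest reliability);
# the same computation applies to scores from two parallel forms.
r = np.corrcoef(test1, test2)[0, 1]
print(f"test-retest r = {r:.2f}")  # roughly .7-.8+ suggests good reliability
```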
Internal Consistency Reliability
- Consistency of items within a measurement instrument
- Split-half: divide the test items into two groups, obtain a score for each half, and correlate the two scores
- Cronbach's alpha: the average of all possible split-half estimates (a sketch of both follows below)
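A minimal sketch of both estimates, assuming an (n participants x k items) NumPy matrix of item scores; the item data here is invented for illustration:

```python
import numpy as np

# Hypothetical item scores: 6 participants (rows) x 4 items (columns)
items = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])

# Split-half: score the odd and even items separately, then correlate
half1 = items[:, 0::2].sum(axis=1)
half2 = items[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(half1, half2)[0, 1]

# Cronbach's alpha: (k / (k-1)) * (1 - sum of item variances / total variance)
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)      # variance of each item
total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"split-half r = {split_half_r:.2f}, alpha = {alpha:.2f}")
```

Note that a raw split-half correlation describes a half-length test; in practice it is usually stepped up to full-test length with the Spearman-Brown formula, 2r / (1 + r).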
Assumptions for Internal Consistency Reliability
- Equivalent halves or items
- Unrelated measurement errors between halves or items
- Items represent the same underlying factor
- Items have been transformed if necessary (e.g., reverse-scored)
Interpreting Internal Consistency
- Generally an alpha of .7 or higher is considered good reliability.
- Look for items which, if removed, would substantially improve the reliability (see the sketch below).
- It may be necessary to retest and do another reliability analysis to confirm.
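One minimal way to look for such items, reusing the made-up item matrix from the previous sketch: recompute alpha with each item dropped in turn.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n participants x k items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item scores: 6 participants (rows) x 4 items (columns)
items = np.array([
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [1, 2, 1, 2],
    [4, 5, 4, 4],
])

print(f"alpha with all items: {cronbach_alpha(items):.2f}")

# An item whose removal raises alpha noticeably is a candidate
# for revision or deletion.
for i in range(items.shape[1]):
    reduced = np.delete(items, i, axis=1)
    print(f"alpha without item {i + 1}: {cronbach_alpha(reduced):.2f}")
```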
Review Question!
What is the difference between test-retest and parallel forms reliability?
Choosing Stats
Participants are shown photos of individuals who differ in two ways: they either do or do not have a noticeable facial scar, and they either do or do not have a nose ring. Each participant rates all four types of photos on how likely they would be to offer the person in the photo a job. The researcher wants to know whether the effect of having a facial scar depends on whether or not the person has a nose ring.