Chapter 8 Validity and Reliability
Validity
How well can you defend the measure?
–Face validity
–Content validity
–Criterion-related validity
–Construct validity
Face Validity
Does the instrument look valid?
–On a survey or questionnaire, the questions seem relevant
–On a checklist, the behaviors seem relevant
–For a performance test, the task seems appropriate
Content Validity
The content of the test (the measure) is relevant to the behavior or construct being measured
An expert or a panel of experts judges the content
Criterion-Related Validity
Using another independent measure to validate a test
–Typically by computing a correlation
Two types:
–Predictive validity
–Concurrent validity
Criterion-Related Validity
–Predictive: the ACT achievement test correlated with college GPA
–Concurrent: the Coopersmith Self-Esteem Scale correlated with teachers' ratings of self-esteem
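A criterion-related validity coefficient is simply the Pearson correlation between the test and the independent criterion. The sketch below illustrates the predictive-validity example above with hypothetical ACT scores and GPAs (the numbers are invented for illustration, not real data):

```python
# Illustrative sketch: predictive validity as a Pearson correlation
# between a predictor (ACT scores) and a criterion (college GPA).
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

act = [21, 25, 30, 18, 27, 33, 24]          # hypothetical ACT scores
gpa = [2.8, 3.1, 3.6, 2.5, 3.2, 3.9, 3.0]   # hypothetical GPAs
print(round(pearson_r(act, gpa), 2))
```

A coefficient near +1 would support the claim that the test predicts the criterion; a coefficient near 0 would not.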
Construct Validity
A construct is an attempt to describe or name an intangible variable
Use many different measures to validate the instrument
Self-esteem is a construct
–Instrument measure: e.g., the Coopersmith Self-Esteem Scale
–Correlate it with:
Behavioral checklist
Teachers' comments
Another accepted self-esteem instrument
A measure of confidence
A locus-of-control measure
Reliability
For an instrument:
–Consistency of scores from use to use
Types of reliability coefficients:
–Test-retest
–Equivalent forms
–Internal consistency
Split-half
Alpha coefficient (Cronbach's alpha)
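The internal-consistency coefficients above can be computed directly from item scores. This sketch computes Cronbach's alpha from a small table of hypothetical 5-point ratings (respondents in rows, items in columns; the data are invented for illustration):

```python
# Illustrative sketch: Cronbach's alpha as a measure of internal consistency.
def variance(xs):
    """Sample variance (n - 1 denominator) of a list of scores."""
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / (n - 1)

def cronbach_alpha(scores):
    """scores: list of respondent rows, each a list of item scores."""
    k = len(scores[0])                         # number of items
    items = list(zip(*scores))                 # column-wise item scores
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_var / total_var)

responses = [        # hypothetical 5-point ratings, 4 items, 5 respondents
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]
print(round(cronbach_alpha(responses), 2))     # prints 0.94
```

Because the hypothetical respondents answer the four items consistently, alpha comes out high; items that disagreed with one another would pull it down.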
Reliability Coefficient
Value ranges from 0 to 1.00
–.70 is commonly considered the minimal acceptable value
–.90 is very good
–.60 is sometimes acceptable but is really not very good
–Lower than .60 is definitely unacceptable
End of Chapter 8