Evaluating Survey Items and Scales
Bonnie L. Halpern-Felsher, Ph.D.
Professor, University of California, San Francisco
Question Evaluation Involves
How well questions are understood and how easy they are to answer
Affects the quality of measurement
How well answers relate to what we are actually trying to measure
–Validity and reliability
Standards for Survey Items/Questions
Content – are the questions asking the right things?
Cognitive – do participants understand the questions, and are they able and willing to answer them?
Usability – can the questions be completed easily?
Ways to Test
Expert review – have experts in the field review the items and scale to ensure they meet the criteria needed
–Great for capturing the researcher's perspective on what is needed, but may not reveal how respondents can answer most accurately
Ways to Test
Focus groups – hold discussions with people from the population of interest to learn what terms they use, whether the questions make sense, and whether the content is complete
–Great for learning what people in general think, but not what individuals think
Ways to Test
Cognitive interviews – pilot test the survey, asking participants what the questions meant to them, why they gave certain answers, and whether they would change anything
–Great for revealing how individuals understand the questions, but the findings might not be generalizable
Looking For:
Interviewer: reads the question exactly as worded, vs. with slight changes, vs. with changes that alter its meaning
Looking For:
Respondent:
–Interrupts
–Asks for clarification
–Gives an adequate answer
–Gives an inadequate answer
–Answers “I don’t know”
–Refuses to answer
Ways to Test
Field pretests – pilot test the survey in the same manner, and with the same kind of sample, as the real administration; examine the distribution of answers; hold debriefings; determine the best administration method
–Shows how the instrument and procedures work under real circumstances and yields summary data, but offers little flexibility to probe or test variations
Ways to Test
Randomized or split-ballot experiments – randomly assign different sets of items, or different wordings of the same items, to different groups and compare the responses
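A minimal sketch of how split-ballot results might be compared, assuming responses to two randomly assigned wordings have been tallied into categories; the counts and category labels below are hypothetical.

```python
# Compare response distributions from a hypothetical split-ballot experiment.
from scipy.stats import chi2_contingency

# Rows: wording A vs. wording B; columns: agree / neutral / disagree (made-up counts).
counts = [
    [120, 45, 35],  # wording A
    [90, 60, 50],   # wording B
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
# A small p-value suggests the two wordings elicit different response distributions.
```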
Reliable & Valid Measures
Reliability: the answers provided are consistent.
Validity: responses relate to some truth concerning what we are trying to describe.
Items need to be consistently understood, administered, and communicated.
Reliability
Validity
Translation validity
–Does the measurement reflect the construct of interest well?
Face validity
Content validity
Validity
Criterion-related validity
–Does the measure behave the way it should, given the theory or construct?
Predictive validity
Concurrent validity
Convergent validity
Discriminant validity
Translation Validity: Face Validity
Does the measure “on the surface” or “on its face” seem like a good representation of the construct?
Translation Validity: Content Validity
Ensuring that your measure taps into each part of the construct that needs to be measured
Criterion-Related Validity: Predictive Validity
Determine whether the measure can predict something it should predict
–E.g., doing well on a certain math exam should predict achievement in engineering
Criterion-Related Validity: Concurrent Validity
Can the measure distinguish between groups that it should be able to distinguish between?
–E.g., an assessment of manic depression should differentiate between manic-depressive patients and schizophrenic patients
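One common way to check concurrent validity is a known-groups comparison. Below is a hypothetical sketch: the scale scores and group sizes are invented for illustration, and a two-sample t-test stands in for whatever comparison suits the data.

```python
# Known-groups comparison for a hypothetical mania scale.
from scipy.stats import ttest_ind

manic_depressive = [42, 38, 45, 40, 47, 44, 39, 46]  # scale scores (illustrative)
schizophrenic = [28, 31, 25, 30, 27, 33, 26, 29]

t, p = ttest_ind(manic_depressive, schizophrenic)
print(f"t = {t:.2f}, p = {p:.4f}")
# If the scale has concurrent validity, the groups it should distinguish
# will differ meaningfully in their mean scores.
```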
Criterion-Related Validity: Convergent Validity
Degree to which the measure is similar to (converges with) measures of related constructs
–E.g., a new measure of IQ should correlate highly with the Stanford-Binet IQ test
Criterion-Related Validity: Discriminant Validity
Degree to which the measure is NOT similar to (diverges from) measures of constructs it is not expected to resemble
–E.g., a measure of math skills should not be correlated with a measure of literacy skills
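Convergent and discriminant validity are often examined together as a pair of correlations. The sketch below assumes per-person scores on a new IQ measure, an established IQ test, and an unrelated literacy measure; all numbers are fabricated for illustration.

```python
# Convergent vs. discriminant correlations for a hypothetical new IQ measure.
import numpy as np
from scipy.stats import pearsonr

new_iq = np.array([100, 112, 95, 120, 108, 90, 115, 103])
established_iq = np.array([98, 115, 93, 118, 110, 92, 113, 105])  # e.g., Stanford-Binet scores
literacy = np.array([60, 62, 58, 61, 59, 63, 60, 62])

r_conv, _ = pearsonr(new_iq, established_iq)  # should be high (convergent)
r_disc, _ = pearsonr(new_iq, literacy)        # should be near zero (discriminant)
print(f"convergent r = {r_conv:.2f}, discriminant r = {r_disc:.2f}")
```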
Reliability
Inter-rater or inter-observer
Test-retest
Parallel forms
Internal consistency
Inter-rater Reliability
Degree to which different raters or observers give consistent estimates of the same issue or phenomenon
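A standard statistic here is Cohen's kappa, which corrects raw agreement for chance. This is a minimal sketch assuming two raters have coded the same eight responses into hypothetical categories.

```python
# Chance-corrected agreement between two raters (illustrative codes).
from sklearn.metrics import cohen_kappa_score

rater1 = ["pos", "neg", "pos", "neu", "pos", "neg", "neu", "pos"]
rater2 = ["pos", "neg", "neu", "neu", "pos", "neg", "neu", "pos"]

kappa = cohen_kappa_score(rater1, rater2)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance-level agreement
```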
Test-Retest Reliability
Give the same test to the same people twice and see how highly the two sets of scores correlate
Parallel-Forms Reliability
Construct two equivalent forms of the measure and see how the scores on them correlate
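Both test-retest and parallel-forms reliability come down to correlating two sets of scores from the same respondents. A minimal sketch with made-up scores:

```python
# Correlate two administrations of the same test (test-retest reliability).
from scipy.stats import pearsonr

time1 = [14, 22, 18, 30, 25, 17, 28, 21]  # first administration (illustrative)
time2 = [15, 20, 19, 29, 26, 16, 27, 23]  # same respondents, second administration

r, _ = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")
# For parallel-forms reliability, substitute scores on Form A and Form B.
```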
Internal Consistency
Give the measure to a sample and estimate how well the items reflect the same construct
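The slide does not name a statistic, but Cronbach's alpha is a common internal-consistency estimate; below is a minimal sketch with fabricated Likert-item scores.

```python
# Cronbach's alpha from a respondents x items score matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative data: 6 respondents answering 4 Likert items.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values near 0.7+ are often considered acceptable
```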