EMR 6500: Survey Research
Dr. Chris L. S. Coryn
Kristin A. Hobson
Spring 2013
Agenda
Constructing open- and closed-ended items
Case Study #3
Open-Ended Questions
Open-Ended Requests for Numerical Responses
General guidelines
1. Ask for the specific unit desired in the question stem
2. Provide answer spaces that are sized appropriately for the response task
3. Provide unit labels with the answer spaces
Open-Ended Numeric
Open-Ended Requests for a List of Items
General guidelines
1. Specify the number and type of responses desired in the question stem
2. Design the answer spaces to support the number and type of responses desired
3. Provide labels with answer spaces to reinforce the type of response requested
Open-Ended Lists
Open-Ended Requests for Description and Elaboration
General guidelines
1. Provide extra motivation to respond
2. Provide adequate space for respondents to completely answer the question
3. Use scrollable boxes on internet surveys
4. Consider programming probes to open-ended responses in internet surveys
Open-Ended Description and Elaboration
Closed-Ended Questions
General Guidelines
1. State both positive and negative sides in the question stem when asking either/or types of questions
2. Develop lists of answer categories that include all reasonable possible answers
3. Develop lists of answer categories that are mutually exclusive
4. Maintain spacing between answer categories that is consistent with measurement intent
Positive and Negative Sides in Question Stem
Exhaustive and Mutually Exclusive Questions
Spacing Response Options Evenly
Closed-Ended Questions: Nominal Scales
General guidelines
1. Ask respondents to rank only a few items at once rather than a long list
2. Avoid bias from unequal comparisons
3. Randomize response options if there is concern about order effects
4. Use forced-choice questions rather than check-all-that-apply questions
5. Consider using differently shaped answer spaces (circles and squares) to help respondents distinguish between single- and multiple-answer questions
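Guideline 3 above lends itself to automation in web surveys. A minimal Python sketch of per-respondent option randomization follows; the option labels and the convention of pinning an "Other" category last are illustrative assumptions, not part of the guidelines:

```python
import random

def randomized_options(options, pinned_last=("Other",), seed=None):
    """Shuffle substantive response options for one respondent while
    keeping residual categories (e.g., 'Other') fixed at the end."""
    rng = random.Random(seed)
    body = [o for o in options if o not in pinned_last]
    tail = [o for o in options if o in pinned_last]
    rng.shuffle(body)
    return body + tail

# Hypothetical nominal response options
opts = ["Bus", "Car", "Bicycle", "Walk", "Other"]
print(randomized_options(opts, seed=42))
```

Seeding per respondent ID would let the same respondent see a stable order across page reloads while still varying order across the sample.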
Closed-Ended Unordered
Ranking: Pairwise Comparisons In ranking questions, it is typically better to present respondents with sets of paired comparisons so that they are only comparing two concepts at a time until all options have been compared to one another
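The pairwise procedure can be sketched in Python: with n options there are n(n-1)/2 pairs to present, and one simple way to recover a ranking is to count how often each option wins its comparisons. The win-counting rule here is an illustrative assumption, not a prescribed scoring method:

```python
from itertools import combinations

def pairwise_comparisons(items):
    """All unordered pairs to present, two concepts at a time."""
    return list(combinations(items, 2))

def rank_from_wins(items, winners):
    """Order items by how many paired comparisons each one won."""
    wins = {item: 0 for item in items}
    for w in winners:
        wins[w] += 1
    return sorted(items, key=lambda i: wins[i], reverse=True)

# Hypothetical concepts to be ranked
concepts = ["Price", "Quality", "Service"]
print(pairwise_comparisons(concepts))  # 3 items -> 3 pairs
```

Note that the number of pairs grows quadratically, which is one reason the previous slide advises ranking only a few items at once.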
Bias from Unequal Comparisons
Check-all-that-Apply versus Forced-Choice
Distinguishing Between Single- and Multiple-Answer Questions
Closed-Ended Questions: Ordinal Scales
General guidelines
1. Choose an appropriate scale length; in general, limit scales to four or five categories
2. Choose direct or construct-specific labels to improve cognition
3. Provide scales that approximate the actual distribution of the characteristic in the population
4. Provide balanced scales where categories are relatively equal distances apart conceptually
5. Consider how verbally labeling and visually displaying all response categories may influence answers
6. Carefully evaluate the use of numeric labels and their impact on measurement
7. Align response options vertically in one column or horizontally in one row and strive for equal distance between categories
8. Place nonsubstantive options at the end of the scale and separate them from substantive options
Polarity in Ordinal Scales
Unipolar ordinal scales measure gradation along one dimension, with the zero point falling at one end of the scale
Bipolar ordinal scales measure gradation along two opposite dimensions, with the zero point falling in the middle of the scale
– Level (e.g., satisfaction, importance)
– Direction (i.e., positive, negative)
Scalar Questions
Construct-Specific Scales
Balanced Scales with Even Distance Between Categories
Fully Labeled and Polar-Point Labeled Scales
Alignment of Response Options
Primacy and Recency
Primacy effects occur when respondents are more likely to select the first option when a scale is presented visually
Recency effects occur when respondents are more likely to select the last option when a scale is presented aurally
Aligning Conceptual and Visual Midpoints
Other Types of Closed-Ended Scales
Method of Equal-Appearing Intervals
One of Thurstone's methods of scaling
1. Determine the attitude object to be measured
2. Construct a set of statements about attitudes toward the object that captures an entire range of opinions, from extremely favorable, to neutral, to extremely unfavorable
3. Typically, 40-50 statements are required
4. Judges rate the favorability of each statement using an 11-point scale
5. Statements are sorted into 11 physical piles (Q-sort)
6. A measure of central tendency of the judges' ratings is used as the scale value for each statement
7. The final scale is constructed by randomly ordering the retained statements into a checklist
8. Scale scores are calculated by taking the mean of the scale values of endorsed statements
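The scoring steps above (6 and 8) can be sketched in Python. The statements, the judge ratings, and the choice of the median as the measure of central tendency are illustrative assumptions:

```python
from statistics import mean, median

# Hypothetical judge ratings on the 11-point favorability scale
judge_ratings = {
    "Statement A": [9, 10, 9, 8, 10],
    "Statement B": [6, 5, 6, 7, 6],
    "Statement C": [2, 1, 3, 2, 2],
}

# Step 6: scale value per statement, here the median of judges' ratings
scale_values = {s: median(r) for s, r in judge_ratings.items()}

def thurstone_score(endorsed):
    """Step 8: respondent's score is the mean scale value
    of the statements he or she endorsed."""
    return mean(scale_values[s] for s in endorsed)

print(scale_values)
print(thurstone_score(["Statement A", "Statement B"]))  # (9 + 6) / 2 = 7.5
```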
Thurstone Scale
Semantic Differential
Osgood's method of scaling based on the connotative meaning of words, which refers to the associations of meaning attached to a word that are not part of its formal definition (its denotative meaning)
Opposing adjective pairs are used to assess three underlying dimensions of connotative meaning
1. Evaluation, which reflects the good-bad continuum of meaning underlying words (e.g., good-bad, valuable-worthless)
2. Potency, which reflects the strong-weak continuum (e.g., large-small, strong-weak)
3. Activity, which reflects the active-passive continuum (e.g., fast-slow, hot-cold)
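Scoring a semantic differential instrument can be sketched as follows. The adjective pairs, the 7-point response format, and the reverse-coding rule for pairs listed with the negative pole first are illustrative assumptions:

```python
from statistics import mean

# Hypothetical 7-point ratings, grouped by Osgood's three dimensions
responses = {
    "evaluation": {"bad-good": 6, "worthless-valuable": 5},
    "potency":    {"weak-strong": 4, "small-large": 3},
    "activity":   {"slow-fast": 5, "cold-hot": 6},
}

def dimension_scores(resp, reverse=(), points=7):
    """Mean rating per dimension, reverse-coding any flagged pairs
    so that all items run in the same direction before averaging."""
    scores = {}
    for dim, pairs in resp.items():
        vals = [(points + 1 - v) if p in reverse else v
                for p, v in pairs.items()]
        scores[dim] = mean(vals)
    return scores

print(dimension_scores(responses))
```

Reverse-coding matters whenever some pairs are printed with their poles flipped to discourage straight-lining; for a flipped pair, a rating v on a 7-point scale becomes 8 - v.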
Semantic Differential Response Formats
Case Study #3
Case Study Activity
1. Construct three questions of any type that reflect the concept/construct of 'course quality'
2. Present your questions and response options to the class and discuss how your questions are 'better' than the standardized questions
– The group with the 'best' questions will receive 5 extra credit points, the second best will receive 4 extra credit points, and so forth