1
Measurement Issues
General steps:
–Determine the concept
–Decide the best way to measure
–Identify what indicators are available
–Select intermediate, alternate, or indirect measures
2
Measurement Issues
General steps (continued):
–Consider the limitations of the measures selected
–Collect or secure info/data
–Summarize findings in writing
3
What is the relation between concepts, variables, instruments & measures?
4
Concepts
A program is based on an underlying concept of why people behave the way they do.
5
Why do you think people behave the way they do? Think of food and nutrition issues
6
Variables
A theory has variables.
Variables define concepts.
The theory states how the variables interact or are related.
7
Variables
The variables of the theory are what you measure.
Variables are the verbal or written abstractions of the ideas that exist in the mind.
8
Why should an intervention be based on a theory?
9
Why use theory?
–You know what to address in the intervention
–It makes evaluation easier
–You know what to measure in the evaluation
10
Figure 6.1 (next slide): A simple social learning theory model for reducing salt in the diet.
11
Fig. 6.1 Social learning theory
12
Need measurements and instruments to assess changes in the variables of interest
13
Instruments
–Something that produces a measure of an object
–A series of questions to measure the variable or concept
–Includes instructions
14
Measures
The numbers that come from the person answering questions on the instrument.
15
Figure 6.2 (next slide): Relation among models, variables, measures, and an instrument.
16
Fig. 6.2 Relation among models, variables, measures, and an instrument
17
Based on why you think people behave the way they do, list possible variables you would consider measuring. What might be the variables of social learning theory?
18
What about variables that would verify whether a change has or has not taken place?
19
Figure 6.1 (next slide): A simple social learning theory model for reducing salt in the diet. See how the program links with the theory and what is measured.
20
Fig. 6.1 Social learning theory
21
Reliability
The extent to which an instrument will produce the same result (measure or score) if applied at two or more different times.
22
Reliability
X = T + E, where X is the obtained measure, T is the true value, and E is random error.
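To make X = T + E concrete, here is a minimal Python sketch (not from the slides) that simulates many applications of an instrument; the true value of 50 and the error spread are invented for illustration.

```python
import numpy as np

# Minimal sketch of X = T + E: each observed measure X is the true
# value T plus random error E. The true value (50) and error SD (5)
# are invented illustration numbers.
rng = np.random.default_rng(seed=1)

T = 50.0                          # true value
E = rng.normal(0.0, 5.0, 1000)    # random error: mean 0, SD 5
X = T + E                         # observed measures

print(f"mean of X: {X.mean():.2f}")       # close to the true value, 50
print(f"SD of X:   {X.std(ddof=1):.2f}")  # spread due to random error
```

Because random error averages out to zero, the mean of many measures lands near the true value, which is why random error reduces reliability without biasing results.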
23
Reliability
Measurement error reduces the ability to have reliable and valid results.
24
Reliability
Random error is all the chance factors that confound the measurement.
It is always present.
It affects reliability but does not bias results.
25
Reliability
Figure 6.5 (next slide): Distribution of scores of multiple applications of a test with random error. A is the true score; a is a measure.
26
Fig. 6.5 Distribution of scores of multiple applications of a test and random error
27
Distributions
Two different distributions can have the same mean. Figure 6.6 is next.
28
Fig. 6.6 Two distributions of scores around the true mean
29
Which distribution has less variability? Which distribution has less random error?
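A short sketch of the idea behind these questions, with invented scores: two distributions share the same true mean, but the one with the smaller standard deviation has less variability and therefore less random error.

```python
import numpy as np

# Two hypothetical score distributions around the same true mean (50),
# echoing Figure 6.6; only the amount of random error differs.
rng = np.random.default_rng(seed=2)
narrow = 50 + rng.normal(0.0, 2.0, 500)   # small random error
wide   = 50 + rng.normal(0.0, 8.0, 500)   # large random error

for name, scores in [("narrow", narrow), ("wide", wide)]:
    print(f"{name}: mean = {scores.mean():.1f}, SD = {scores.std(ddof=1):.1f}")
# Both means sit near 50; the smaller SD marks the distribution with
# less variability and less random error.
```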
30
Sources of Random Error
–Day-to-day variability
–Confusing instructions
–Unclear instrument
–Sloppy data collector
31
Sources of Random Error
–Distracting environment
–Respondents
–Data-management error
32
What can you do to reduce random error and increase reliability?
33
Variability & the Subject
What you want to measure will vary from day to day and within the person.
34
Variability & the Subject
Intraindividual variability –variability among the true scores within a person over time
35
Figure 6.7 (next slide): True activity scores (A, B, C) for 3 days with three measures (a, b, c) per day.
36
Fig. 6.7 True activity (A, B, C) for 3 days with three measures (a, b, c) per day
37
Variability & the Subject
Interindividual variability –variability between the persons in the sample
38
Figure 6.8 (next slide): Interindividual (A, X) and intraindividual (A1, A2, A3) variability for two people (A, X) in level of physical activity.
39
Fig. 6.8 Interindividual (A, X) and intraindividual (A1, A2, A3) variability for two people (A, X) in level of physical activity
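One simple way to see the two kinds of variability with numbers; the activity scores below are invented to echo Figures 6.7 and 6.8, with three measures per day over three days for two people.

```python
import numpy as np

# Invented activity scores (three measures per day, three days) for
# two people, A and X, echoing Figures 6.7 and 6.8.
person_A = np.array([[30, 35, 32],    # day 1: measures a, b, c
                     [55, 60, 58],    # day 2
                     [40, 44, 41]])   # day 3
person_X = np.array([[80, 85, 82],
                     [90, 95, 93],
                     [75, 78, 76]])

for name, scores in [("A", person_A), ("X", person_X)]:
    daily_means = scores.mean(axis=1)
    # Intraindividual variability: spread of one person's scores across days.
    print(f"{name}: daily means = {daily_means.round(1)}, "
          f"SD across days = {daily_means.std(ddof=1):.1f}")

# Interindividual variability: the difference between the two people's
# overall activity levels.
print(f"overall mean A = {person_A.mean():.1f}, overall mean X = {person_X.mean():.1f}")
```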
40
Assessing Reliability
–Need to know the reliability of your instruments
–A reliability coefficient of 1 is the highest: no error
–A reliability coefficient of 0 is the lowest: all error
41
Factors of Reliability
Type of instrument:
–observer
–self-report
Times the instrument is applied:
–same time
–different time
42
Figure 6.9 (next slide): Types of reliability.
43
Fig. 6.9 Types of reliability
44
Assessing Reliability Interobserver reliability –have 2 different observers rate same action at same time –reproducibility
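A small sketch of checking interobserver reliability with invented ratings; the slides do not name a statistic, so percent exact agreement and a simple correlation are used here (an intraclass correlation or kappa would also be reasonable).

```python
import numpy as np

# Two observers rate the same eight actions at the same time
# (ratings are invented for illustration).
observer_1 = np.array([3, 5, 2, 4, 4, 1, 5, 3])
observer_2 = np.array([3, 4, 2, 4, 5, 1, 5, 2])

agreement = np.mean(observer_1 == observer_2)     # exact agreement
r = np.corrcoef(observer_1, observer_2)[0, 1]     # correlation between raters

print(f"exact agreement: {agreement:.0%}")
print(f"correlation between observers: {r:.2f}")  # closer to 1 = better
```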
45
Assessing Reliability Intraobserver reliability –1 observer assesses the same person at two different times –videotape the action and practice
46
Assessing Reliability Repeat method –self-report or survey –repeat the same item/question at 2 points in survey
47
Assessing Reliability Internal consistency –average inter-item correlation among items in an instrument that are cognitively related
48
Assessing Reliability Internal consistency –Cronbach’s alpha –a value of 0.70 and above is a good score
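A minimal sketch of Cronbach’s alpha on invented item scores (six respondents, four cognitively related items), using the standard formula based on item and total-score variances.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented scores: 6 respondents answering 4 related items (1-5 scale).
scores = np.array([[4, 5, 4, 5],
                   [2, 2, 3, 2],
                   [5, 4, 5, 5],
                   [3, 3, 2, 3],
                   [4, 4, 4, 3],
                   [1, 2, 1, 2]])

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # 0.70 and above is a good score
```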
49
Assessing Reliability Test-retest reliability –the same survey/test is given to the same person at two different times
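A sketch of a test-retest check with invented data: the same survey is given to the same eight people at two different times, and the correlation between the two administrations serves as the reliability estimate.

```python
import numpy as np

# Scores from the same survey given to the same 8 people at two
# different times (invented numbers).
time_1 = np.array([12, 18, 9, 15, 20, 11, 17, 14])
time_2 = np.array([13, 17, 10, 15, 19, 12, 16, 15])

r = np.corrcoef(time_1, time_2)[0, 1]
print(f"test-retest reliability estimate: {r:.2f}")  # 1 would mean no random error
```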
50
Validity
The degree to which an instrument measures what the evaluator wants it to measure.
51
Bias
Systematic error that produces a systematic difference between an obtained score and the true score.
Bias threatens validity.
52
Bias
Figure 6.10 (next slide): Distribution of scores of multiple applications of a test with systematic error.
53
Fig. 6.10 Distribution of scores of multiple applications of a test with systematic error
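Extending the earlier X = T + E sketch with a systematic error term (written B here; the symbol is not from the slides) shows how bias shifts the whole distribution of scores away from the true value, as in Figure 6.10. All numbers are invented.

```python
import numpy as np

# X = T + E + B: random error E spreads scores around the true value T,
# while systematic error (bias) B shifts the whole distribution.
rng = np.random.default_rng(seed=3)
T = 50.0
E = rng.normal(0.0, 5.0, 1000)   # random error
B = 8.0                          # bias, e.g. an instrument that over-reports

unbiased = T + E
biased = T + E + B

print(f"true value:        {T}")
print(f"mean without bias: {unbiased.mean():.1f}")  # near 50
print(f"mean with bias:    {biased.mean():.1f}")    # shifted up by about 8
```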
54
What will bias do to your ability to make conclusions about your subjects?
55
Figure 6.11 (next slide): Effect of bias on conclusions.
56
Fig. 6.11 Effect of bias on conclusions
57
Types of Validity
–Face
–Content
–Criterion
58
Face Validity
Describes the extent to which an instrument appears to measure what it is supposed to measure.
Example: How many vegetables did you eat yesterday?
59
Content Validity
The extent to which an instrument is expected to cover the relevant domains of the content.
Consult a group of experts.
60
Criterion Validity
How accurately a less costly instrument measures the variable compared with a validated, more expensive instrument (the criterion).
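A sketch of a criterion validity check with invented data: scores from a short, inexpensive questionnaire are compared with a costlier criterion measure for the same people; the slides do not name a statistic, so a simple correlation is used.

```python
import numpy as np

# Invented scores: a cheap instrument (e.g., a short questionnaire)
# versus a more expensive criterion measure for the same 8 people.
cheap_instrument = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.6, 2.2, 3.1])
criterion        = np.array([2.4, 3.6, 1.5, 4.3, 3.1, 3.9, 2.0, 3.0])

r = np.corrcoef(cheap_instrument, criterion)[0, 1]
print(f"correlation with the criterion: {r:.2f}")  # higher = better criterion validity
```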
61
What can lower validity?
Guinea pig effect –awareness of being tested
Role selection –awareness of being measured may make people feel they have to play a role
62
What can lower validity? Measurement as a change agent –act of measurement could change future behavior
63
What can lower validity? Response sets –respond in a predictable way that has nothing to do with the questions
64
What can lower validity? Interviewer effects –characteristics of the interviewer affect the receptivity and answers of the respondent
65
What can lower validity? Population restrictions –if some people cannot use the method of data collection, the results cannot be generalized to them
66
End of reliability and validity.
Questions?
Look at the CNEP Survey.