1
Catching Up: Review
2
Measurement: “Rules for assigning numbers to objects (or concepts) to represent quantities of attributes.”
3
Measurement: Names and symbols are arbitrary.
But to be a true number scale the symbols must follow some logical and systematic arrangement.
4
Numbers can be assigned using… Scales:
“A scale is the continuum upon which measurements are located.” Some very common scales:
5
Scales: Likert Scale: A statement (not a question) followed by five categories of agreement.
6
Scales: Likert Scale Ice cream is good for breakfast.
1. Strongly disagree 2. Disagree 3. Neither agree nor disagree 4. Agree 5. Strongly agree
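As a quick illustration of how such responses become numbers, here is a minimal sketch; the item and the five labels come from the slide above, while the respondents and their answers are invented for illustration.

```python
# Minimal sketch: coding Likert responses as the numbers 1-5.
# The item and labels come from the slide; the responses are invented.

LIKERT_CODES = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

item = "Ice cream is good for breakfast."

# Hypothetical answers from five respondents.
responses = ["Agree", "Strongly agree", "Disagree", "Agree", "Neither agree nor disagree"]

scores = [LIKERT_CODES[r] for r in responses]
print(item)
print("Scores:", scores)                             # [4, 5, 2, 4, 3]
print("Mean agreement:", sum(scores) / len(scores))  # 3.6
```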
7
Scales: Likert Scale
8
Scales: Likert Scale
9
Scales: Likert-like
10
Scales:
11
Scales: Semantic scales: Typically, opposite adjectives separated by 7 selection points.
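A small sketch of how such an item might be laid out; the adjective pair "boring/exciting" is an invented example, and only the 7-point layout comes from the slide.

```python
# Minimal sketch of a semantic differential item: two opposite adjectives
# separated by 7 selection points. The adjective pair is invented.
def semantic_item(left, right, points=7):
    """Render the item; the respondent's mark is recorded as the number chosen (1..points)."""
    scale = " ".join(str(i) for i in range(1, points + 1))
    return f"{left}  {scale}  {right}"

print(semantic_item("boring", "exciting"))
# boring  1 2 3 4 5 6 7  exciting
```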
12
Scales:
13
Semantic scales:
15
Semantic scales:
16
Hybrid Scales:
17
Measurement Characteristics:
18
Measurement characteristics:
Y = X(true) + X(systematic error) + X(random error)
19
Measurement characteristics:
Y = X(true) + X(systematic error) + X(random error). Systematic error can be eliminated.
20
Measurement characteristics:
Y = X(true) + X(systematic error) + X(random error). Random error cannot be eliminated.
21
Measurement characteristics:
Y = X(true) + X(systematic error) + X(random error). If a sample is taken to estimate an answer, another form of error is added…
22
Measurement characteristics:
This is called Sampling Error: Y = X(true) + X(systematic error) + X(random error) + X(sampling error). If you take a sample… you will create a sampling error!
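A minimal simulation sketch of this error model may help; only the decomposition itself comes from the slide, and the true value, bias, noise level, and sample size below are invented for illustration.

```python
# Minimal simulation sketch of the measurement model above:
#   Y = X(true) + X(systematic error) + X(random error) [+ X(sampling error)]
# All numbers are invented for illustration.
import random

random.seed(0)

TRUE_VALUE = 100.0        # the attribute we are actually trying to measure
SYSTEMATIC_ERROR = 2.5    # constant bias, e.g. a miscalibrated instrument

def measure():
    """One observed score: true value + systematic bias + random noise."""
    random_error = random.gauss(0, 3)     # random error, mean 0
    return TRUE_VALUE + SYSTEMATIC_ERROR + random_error

# Averaging many measurements washes out the random error
# but leaves the systematic error untouched.
scores = [measure() for _ in range(10_000)]
print(sum(scores) / len(scores))          # ~102.5 = TRUE_VALUE + SYSTEMATIC_ERROR

# Sampling error: estimating the mean from only a small sample of the
# measurements adds yet another source of error.
sample = random.sample(scores, 25)
print(sum(sample) / len(sample))          # varies from sample to sample
```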
23
You and a friend (in the same class) take the same exam at the same time and get different grades. WHY?
24
Take a piece of paper and write down five different reasons why these two friends, taking the same class, would get different grades. What, then, did the grade actually measure? Write down a definition of a “grade.” If you suggested that a “grade” is a measurement of what a student knows, how many “grades” would need to be taken before you could be confident that the student actually knows what the grades indicate they know?
25
Measurement characteristics:
Reliability
26
Measurement characteristics:
Validity Before validity can be established, it is necessary to show that measurements have reliability. A measurement can be reliable without being valid, but it cannot be judged to be valid without reliability.
27
Measurement characteristics:
Reliability: 1. Stability
28
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms
29
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms 2. Equivalence
30
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms 2. Equivalence a. Kuder-Richardson b. Cronbach’s Alpha
31
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms 2. Equivalence a. Kuder-Richardson b. Cronbach’s Alpha (Lee Cronbach)
32
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms 2. Equivalence a. Kuder-Richardson b. Cronbach’s Alpha (Learn, Effective, & Like the instructor)
33
Measurement characteristics:
Reliability: 1. Stability a. Test-retest b. Equivalent forms 2. Equivalence a. Kuder-Richardson b. Cronbach’s Alpha 3. Inter-rater Consistency a. Krippendorff’s Alpha (Klaus Krippendorff, b. 1932)
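Of these, Cronbach’s Alpha is probably the most commonly reported. Below is a minimal sketch of the usual formula, alpha = k/(k-1) × (1 − sum of item variances / variance of total scores), applied to invented course-evaluation ratings; only the formula itself is standard, everything else is illustrative.

```python
# Minimal sketch of Cronbach's Alpha for a small set of items,
# e.g. three course-evaluation ratings. The ratings are invented;
# only the formula is standard:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
import statistics

# Rows = respondents, columns = items
ratings = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
    [4, 4, 5],
]

def cronbach_alpha(rows):
    k = len(rows[0])                      # number of items
    items = list(zip(*rows))              # transpose: one tuple of scores per item
    item_vars = [statistics.pvariance(col) for col in items]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

print(round(cronbach_alpha(ratings), 2))  # ~0.90 for these invented ratings
```

Kuder-Richardson (KR-20) is the special case of the same idea for dichotomous (right/wrong) items; inter-rater measures such as Krippendorff’s Alpha are usually computed with dedicated software.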
34
The reliability coefficient is simply a proportion: it is the ratio of the true variance to the total variance created by the instrument. If r_xx = 0.70, then 70% of the variance of the instrument can be seen as being created by what was being measured, and 30% by everything else.
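Restated as a tiny worked example (the 0.70 split is the slide’s own number; the raw variance values are invented):

```python
# The reliability coefficient as a proportion of variance.
# The 0.70 split restates the example above; the raw values are invented.
true_variance = 7.0    # variance created by what was being measured (hypothetical)
error_variance = 3.0   # variance created by everything else (hypothetical)

total_variance = true_variance + error_variance
r_xx = true_variance / total_variance
print(r_xx)            # 0.7 -> 70% "true" variance, 30% everything else
```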
35
Measurement characteristics:
If a measurement is reliable, it may be valid, but there are many ways that a measurement could be valid or invalid.
36
Measurement characteristics:
Validity: 1. Face validity
38
Measurement characteristics:
Validity: 1. Face 2. Content
40
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion
41
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive
43
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct
44
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct
46
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct a. Convergent
47
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct a. Convergent b. Divergent
48
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct a. Convergent b. Divergent c. Discriminant
49
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct a. Convergent b. Divergent c. Discriminant d. Nomological
50
Measurement characteristics:
Validity: 1. Face 2. Content 3. Criterion a. Concurrent b. Predictive 4. Construct 5. Utilitarian (?): A measurement may satisfy a utilitarian goal independently of any validity of the actual measurement.
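Convergent and discriminant validity are often checked with simple correlations: a new measure should correlate strongly with an established measure of the same construct (convergent) and weakly with a measure of an unrelated construct (discriminant). A toy sketch with invented scores:

```python
# Toy sketch of checking convergent and discriminant validity with
# correlations. All scores are invented for illustration.
from statistics import correlation   # available in Python 3.10+

new_measure     = [12, 15, 11, 18, 14, 16, 10, 17]
same_construct  = [30, 36, 28, 40, 33, 38, 27, 39]  # established measure of the SAME construct
other_construct = [ 6,  7,  5,  7,  9,  5,  8,  6]  # measure of an UNRELATED construct

print("Convergent:", round(correlation(new_measure, same_construct), 2))    # high is good
print("Discriminant:", round(correlation(new_measure, other_construct), 2)) # near zero is good
```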
51
It is very important that we can measure Hypothetical Constructs!
52
In practical terms, a construct is defined by HOW it is measured. This is called an “Operational Definition.”
53
Is an electron a particle or is it a wave? That depends upon how it is measured!
55
Percy W. Bridgman: Operationalization is the process of defining a fuzzy concept so as to make the concept measurable in the form of variables consisting of specific observations. In a wider sense, it refers to the process of specifying the extension of a concept.
56
Intelligence could be defined as how fast (reaction time, RT) a person solves a puzzle…
57
Or intelligence could be defined as not getting cheated at the car dealership.
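To make the idea of an operational definition concrete, here is a hypothetical sketch in which the same construct, “intelligence,” is operationalized in the two different ways named above; both functions and all numbers are invented for illustration.

```python
# Hypothetical sketch: the same construct, "intelligence," under two
# different operational definitions. Both functions and all numbers
# are invented for illustration only.
import time

def intelligence_as_reaction_time(solve_puzzle):
    """Operational definition 1: how fast (RT) a person solves a puzzle.
    `solve_puzzle` is any callable; faster solutions score higher."""
    start = time.perf_counter()
    solve_puzzle()
    elapsed = time.perf_counter() - start
    return 1.0 / elapsed

def intelligence_as_not_getting_cheated(price_paid, fair_price):
    """Operational definition 2: not getting cheated at the car dealership,
    scored here as how close the price paid is to a fair price."""
    return fair_price / price_paid        # 1.0 means the fair price was paid

# The two definitions measure different things, so the same person
# could score high on one and low on the other.
print(intelligence_as_reaction_time(lambda: sum(range(1_000_000))))
print(intelligence_as_not_getting_cheated(price_paid=27_000, fair_price=24_000))
```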