Part 2: Quantitative Methods
October 2, 2006
Notice where statistics fits into this one process.
What statistics can and cannot do to address a research question.
Sampling
Target vs. Accessible Population
Target population → Accessible population:
- High school sports officials → National Association of Sports Officials membership
- Students enrolled in 5th grade in NM schools → New Mexico 5th graders
- Albuquerque residents → Albuquerque phone book
- The U.S. electorate → Registered voters
Sampling overview (concept map): the inferential leap; population validity; sampling processes and procedures; simple random, systematic, stratified, cluster, and convenience sampling; random selection and assignment; sampling error; selection bias; volunteers; size of sample; pros and cons of each.
The Inferential Leap
Simple Random Sampling
1. Andrea 2. Tina 3. Paul 4. Wilbur 5. Sandra 6. Kathy 7. Jim 8. George 9. Emir 10. Becky 11. Sharon 12. Gladys 13. Jose 14. Bill 15. Sue 16. Erica 17. Aaron 18. Fred 19. Pam 20. Roger
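The roster above can be sampled in code. A minimal sketch in Python (the roster is the slide's example; the sample size of 5 and the seed are illustrative assumptions; `random.sample` draws without replacement, so every member has an equal chance of selection):

```python
import random

roster = ["Andrea", "Tina", "Paul", "Wilbur", "Sandra", "Kathy", "Jim",
          "George", "Emir", "Becky", "Sharon", "Gladys", "Jose", "Bill",
          "Sue", "Erica", "Aaron", "Fred", "Pam", "Roger"]

random.seed(42)  # fixed seed only so the example is reproducible
sample = random.sample(roster, k=5)  # simple random sample without replacement
print(sample)
```

Any member is equally likely to appear in `sample`, which is the defining property of simple random sampling.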
Stratified Random Sampling
Stratum 1: 1. Jose 2. Bill 3. Aaron 4. Fred 5. Roger 6. Paul 7. Wilbur 8. Jim 9. George 10. Emir
Stratum 2: 1. Andrea 2. Tina 3. Kathy 4. Sandra 5. Becky 6. Sharon 7. Gladys 8. Sue 9. Erica 10. Pam
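The idea can be sketched in Python: split the roster into strata, then draw the same fraction from each. (The stratum labels and the 40% sampling fraction here are illustrative assumptions, not from the slide.)

```python
import random

# Hypothetical strata; in practice these come from a known grouping variable.
strata = {
    "stratum_1": ["Jose", "Bill", "Aaron", "Fred", "Roger",
                  "Paul", "Wilbur", "Jim", "George", "Emir"],
    "stratum_2": ["Andrea", "Tina", "Kathy", "Sandra", "Becky",
                  "Sharon", "Gladys", "Sue", "Erica", "Pam"],
}

random.seed(1)
# Proportional allocation: sample the same fraction (here 40%) from each stratum.
sample = {name: random.sample(members, k=int(0.4 * len(members)))
          for name, members in strata.items()}
print(sample)
```

Because each stratum contributes in proportion to its size, the sample preserves the strata's relative representation in the population.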
Systematic Sampling. Say you have a target population with 100,000 members, a list of the population is available, and you need 1,000 cases for your sample. Then 100,000 / 1,000 = 100: select a random starting number from a table of random numbers, then select every 100th case.
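The interval arithmetic above translates directly into code; a sketch (the list of 100,000 integers is a stand-in for the real membership list):

```python
import random

population = list(range(100_000))   # stand-in for the membership list
n = 1_000                           # cases needed
k = len(population) // n            # sampling interval: 100,000 / 1,000 = 100

random.seed(7)
start = random.randrange(k)         # random start between 0 and k - 1
sample = population[start::k]       # then take every k-th case
print(len(sample))
```

With a random start below the interval, the slice yields exactly n evenly spaced cases.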
Cluster Sampling. Naturally occurring groups, e.g., state, district, school, classroom, student. Randomly sample clusters from one level, then survey, interview, etc.
Multistage sampling: randomly select from one level, then randomly select within that level.
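A two-stage version of this can be sketched as follows (the school and classroom names are made up for illustration):

```python
import random

# Hypothetical two-level frame: schools (clusters) containing classrooms.
schools = {
    "North HS": ["N-101", "N-102", "N-103", "N-104"],
    "South HS": ["S-201", "S-202", "S-203"],
    "East HS":  ["E-301", "E-302", "E-303", "E-304", "E-305"],
}

random.seed(3)
# Stage 1: randomly select clusters (schools).
chosen_schools = random.sample(list(schools), k=2)
# Stage 2: randomly select units (classrooms) within each chosen cluster.
chosen_rooms = {s: random.sample(schools[s], k=2) for s in chosen_schools}
print(chosen_rooms)
```

Only units inside the selected clusters can enter the sample, which is what makes cluster sampling cheaper than simple random sampling over the whole frame.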
Convenience Sampling. Why convenient?
- The sample is located near the researcher
- Connections with an administrator or staff
- The researcher is familiar with the setting
- The data are already available
What are the shortcomings of convenience samples?
Volunteers in Sampling
How might volunteers differ? Children having parental permission are:
- More academically competent
- More popular with peers
- More physically attractive
- Less likely to smoke or use drugs
- More likely to be white
- More likely to come from a two-parent household
- More likely to be involved in extracurricular activities
- Less likely to be socially withdrawn
- Less likely to be aggressive
Size of the Sample. Bigger is (usually) better. Unless? How big is big? Power analysis. Practical issues: attrition, reliability, cost/benefit.
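A full power analysis depends on the design, but for estimating a proportion there is a common back-of-the-envelope formula, n = z² · p · (1 − p) / e², where e is the desired margin of error. A sketch (the 95% z-value of 1.96 and the worst-case p = 0.5 are conventional assumptions):

```python
import math

def sample_size_for_proportion(margin, confidence_z=1.96, p=0.5):
    """Minimum n to estimate a proportion within +/- margin.

    Uses n = z^2 * p * (1 - p) / margin^2; p = 0.5 is the most
    conservative (largest-n) assumption about the true proportion.
    """
    return math.ceil(confidence_z**2 * p * (1 - p) / margin**2)

print(sample_size_for_proportion(0.05))  # +/- 5 points at 95% confidence -> 385
print(sample_size_for_proportion(0.03))  # +/- 3 points at 95% confidence -> 1068
```

Note how halving the margin of error roughly quadruples the required n, which is why "bigger is better" quickly collides with cost.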
Correlation & Instrumentation
Reliability and Validity
Correlation Coefficients
Pearson product-moment correlation: the degree of relationship between two variables.
Positive: as one variable increases (or decreases), so does the other.
Negative: as one variable increases, the other decreases.
Magnitude, or strength of the relationship: −1.00 to +1.00.
Correlation does not equate to causation.
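The coefficient itself is straightforward to compute; a minimal sketch (the study-hours data are invented purely for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))   # co-deviation
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))          # spread of x
    sy = math.sqrt(sum((b - my) ** 2 for b in y))          # spread of y
    return cov / (sx * sy)                                 # always in [-1, +1]

hours = [1, 2, 3, 4, 5]       # made-up data: hours studied
score = [52, 55, 61, 64, 68]  # made-up data: test score
print(round(pearson_r(hours, score), 3))
```

A value near +1 means the points hug an upward-sloping line; note that r measures the tightness of the scatter, not the steepness of the line.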
Positive Correlation
Negative Correlation
No Correlation
Correlations. The thickness of the scatter plot determines the strength of a correlation, not the slope of the line. Remember: correlation does not equate to causation.
Negative Correlation
Operationism vs. Essentialism
According to Stanovich What are they? How do they differ?
Essentialism
Likes to argue about the meaning of our terms: “What does the theoretical concept really mean?” Demands a complete and unambiguous understanding of the language involved.
Operationism
Links concepts to observable events that can be measured. Concepts in science are related to a set of operations. Several slightly different tasks and behavioral events are used to converge on a concept.
Validity and Reliability
Validity is an important consideration in the choice of an instrument to be used in a research investigation: the instrument should measure what it is supposed to measure. Researchers want instruments that allow them to make warranted conclusions about the characteristics of the subjects they study.
Reliability is another important consideration, since researchers want consistent results from instrumentation. Consistency gives researchers confidence that the results actually represent the achievement of the individuals involved.
Reliability
- Test-retest reliability
- Inter-rater reliability
- Parallel-forms reliability
- Internal consistency (a.k.a. Cronbach’s alpha)
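Internal consistency can be computed directly from item-level scores. A sketch of Cronbach's alpha, α = k/(k−1) · (1 − Σ σᵢ² / σ_total²), on made-up data (the 3-item, 4-respondent scale below is purely illustrative):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists (same respondents).

    items[i][j] is item i's score for respondent j.
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):            # population variance; the n vs. n-1 choice
        m = sum(xs) / len(xs)   # cancels in the variance ratio below
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical 3-item scale answered by 4 respondents.
items = [[3, 4, 4, 5],
         [2, 4, 5, 5],
         [3, 5, 4, 4]]
print(round(cronbach_alpha(items), 3))
```

Higher alpha means the items covary strongly, i.e., they appear to measure the same underlying attribute.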
Validity
- Face: Does it appear to measure what it purports to measure?
- Content: Do the items cover the domain?
- Construct: Does it measure the unobservable attribute that it purports to measure?
Validity (cont.)
- Criterion: predictive, concurrent
- Consequential
Types of validity (cont.)
Here the instrument samples some, and only some, of the construct.
Types of validity. Here the instrument samples all of the construct, and more.
Here the instrument fails to sample ANY of the construct. (Diagram: the construct vs. the instrument.)
Here the instrument samples some, but not all, of the construct. (Diagram: the construct vs. the instrument.)
Perfection!
Reliability and Validity
In groups of 3 to 4:
Sampling
- What is the target population?
- What sampling procedure was used?
- Do you think the sample is representative? Why or why not?
Measurement
- What types of reliability and validity evidence are provided?
- What else would you like to know?
Ways to Classify Instruments
Who provides the information?
- The subjects themselves: self-report data, gathered directly or indirectly from the subjects of the study
- Informants: people who are knowledgeable about the subjects and provide this information
Types of Researcher-completed Instruments
- Rating scales
- Interview schedules
- Tally sheets
- Flowcharts
- Performance checklists
- Observation forms
Excerpt from a Behavior Rating Scale for Teachers
Instructions: For each of the behaviors listed below, circle the appropriate number, using the following key: 5 = Excellent, 4 = Above Average, 3 = Average, 2 = Below Average, 1 = Poor.
A. Explains course material clearly.
B. Establishes rapport with students.
C. Asks high-level questions.
D. Varies class activities.
Excerpt from a Graphic Rating Scale
Instructions: Indicate the quality of the student’s participation in the following class activities by placing an X anywhere along each line (Always / Frequently / Occasionally / Seldom / Never).
1. Listens to teacher’s instructions.
2. Listens to the opinions of other students.
3. Offers own opinions in class discussions.
Sample Observation Form
Discussion Analysis Tally Sheet
Performance Checklist Noting Student Actions
Types of Subject-completed Instruments
- Questionnaires
- Self-checklists
- Attitude scales
- Personality inventories
- Achievement/aptitude tests
- Performance tests
- Projective devices
Example of a Self-Checklist
Example of Items from a Likert Scale
Example of the Semantic Differential
Pictorial Attitude Scale for Use with Young Children
Sample Items from a Personality Inventory
Sample Items from an Achievement Test
Sample Item from an Aptitude Test
Sample Items from an Intelligence Test
Item Formats
Questions used in a subject-completed instrument can take many forms, but are classified as either selection or supply items.
Selection items: true-false items, matching items, multiple-choice items, interpretive exercises.
Supply items: short-answer items, essay questions.
Unobtrusive Measures. Many instruments require the cooperation of the respondent in one way or another and may intrude into an ongoing activity, which can provoke a negative reaction from the respondent. To avoid this, researchers use unobtrusive measures: data-collection procedures that involve no intrusion into the naturally occurring course of events. In most cases no instrument is used, but good record keeping is necessary. Unobtrusive measures are valuable as supplements to interviews and questionnaires, often providing a useful way to corroborate what more traditional data sources reveal.
Norm-Referenced vs. Criterion-Referenced Instruments
All derived scores give meaning to individual scores by comparing them to the scores of a group. The group used to determine derived scores is called the norm group, and instruments that provide such scores are referred to as norm-referenced instruments.
An alternative to the use of achievement or performance instruments is a criterion-referenced test, which is based on a specific goal or target (criterion) for each learner to achieve. The difference between the two is that criterion-referenced tests focus more directly on instruction.