Chapter 8: Developing Written Tests and Surveys
Physical Fitness Knowledge
Test Planning
Types:
- Mastery
- Achievement
Table of Specifications
- Content objectives: history, values, equipment, etiquette, safety, rules, strategy, techniques of play
- Educational objectives: knowledge, comprehension, application, analysis, synthesis, evaluation
Table 8.1: Table of Specifications for a 60-Item Written Test on Badminton (educational objectives by content objectives)
Test Characteristics
- How to measure
- When to test
- How many questions?
- What format should be used?
- What types of questions should be used?
Essay Questions
- When to use
- Weaknesses
- Scoring objectivity
- Construction recommendations
- Scoring recommendations
Semi-Objective Questions
- When to use
- Weaknesses
- Construction recommendations
- Scoring recommendations
Objective Questions
- True/false
- Matching
- Multiple choice
- Construction recommendations
- Scoring recommendations
Competencies

Competency                                                        A correct  B correct  C correct  D correct  Total
Motor development and motor learning – Competency 1                  23         16         13          9       61
Fitness and fitness development/maintenance – Competency 2           21         14         13         14       62
Lifetime individual, dual, and group activities – Competency 3        6          7         10         12       35
Fitness-related health, nutrition, and safety – Competency 4         16         13         15         10       54
Affective development – Competency 5                                  8          7          6          5       26
Social development – Competency 6                                     4          2          6          5       17
Cognitive development – Competency 7                                  8          8          8          6       30
Physical education program – Competency 8                             5          6          9         10       30
Learner assessment – Competency 9                                     4         13          8         18       43
Managing physical education classes – Competency 10                  11         12          9         13       45
Legal, ethical, medical, and safety concerns – Competency 11         10          9         14         18       51
Total correct responses                                             116        107        111        120      454
% correct responses by foil                                         26%        24%        24%        26%     100%
Figure 8.1: The Difference Between Extrinsic and Intrinsic Ambiguity
Administering the Written Test
- Before the test
- During the test
- After the test
Characteristics of Good Test Items
- Leave little to "chance"
- Reliable
- Relevant
- Valid
- Average difficulty
- Discriminate:
  - Gotten correct by more knowledgeable students
  - Missed by less knowledgeable students
- Time consuming to write
Some Test-Taking Skills
Review these important skills on pages 171-172:
- Preparing for the test
- Getting started and taking the test
- After taking the test
Quality of the Test
- Reliability and validity
- Overall test quality
- Individual item quality
Alpha Coefficient
Recall the alpha coefficient from chapter 6.
KR-20

\[ KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum pq}{s^2}\right) \]

where k is the number of items, p is the proportion answering an item correctly, q = 1 - p, and s^2 is the variance of the total test scores.
Notice the Similarities

\[ KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum pq}{s^2}\right) \qquad \alpha = \frac{k}{k-1}\left(1 - \frac{\sum s_i^2}{s^2}\right) \]

These two ARE the same. Use alpha with continuous data. Use KR-20 with dichotomous data.
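The equivalence can be seen numerically. A minimal sketch with a made-up 0/1 item matrix: for dichotomous items the item variance equals p*q, so alpha and KR-20 compute the same value.

```python
# Sketch with made-up data: alpha and KR-20 agree on dichotomous (0/1) items,
# because for 0/1 scores the item variance equals p*q.
from statistics import pvariance

scores = [  # rows = examinees, columns = items (1 = correct, 0 = incorrect)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
k = len(scores[0])                      # number of items
n = len(scores)                         # number of examinees
totals = [sum(row) for row in scores]   # total score per examinee
s2 = pvariance(totals)                  # variance of the total scores

# Alpha: sum of item variances over total-score variance.
item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
alpha = (k / (k - 1)) * (1 - sum(item_vars) / s2)

# KR-20: sum of p*q over total-score variance (p = proportion correct).
pq = [(sum(row[i] for row in scores) / n) * (1 - sum(row[i] for row in scores) / n)
      for i in range(k)]
kr20 = (k / (k - 1)) * (1 - sum(pq) / s2)

print(alpha, kr20)  # identical values
```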
KR-21

\[ KR_{21} = \frac{k}{k-1}\left(1 - \frac{M(k-M)}{k\,s^2}\right) \]

where k is the number of items, M is the mean total score, and s^2 is the variance of the total test scores.
Notice the Similarities

\[ KR_{21} = \frac{k}{k-1}\left(1 - \frac{M(k-M)}{k\,s^2}\right) \qquad KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum pq}{s^2}\right) \qquad \alpha = \frac{k}{k-1}\left(1 - \frac{\sum s_i^2}{s^2}\right) \]
Notice the Similarities

\[ KR_{21} = \frac{k}{k-1}\left(1 - \frac{M(k-M)}{k\,s^2}\right) \qquad KR_{20} = \frac{k}{k-1}\left(1 - \frac{\sum pq}{s^2}\right) \]

KR-21 is a conservative estimate of the reliability. KR-20 will ALWAYS be greater than or equal to KR-21, but KR-21 is much easier to calculate.
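A short sketch of the trade-off, with made-up data: KR-21 needs only the mean and variance of the total scores (it assumes all items are equally difficult), while KR-20 needs per-item proportions, and KR-21 comes out no larger than KR-20.

```python
# Sketch with made-up data: KR-21 as a quick, conservative stand-in for KR-20.
from statistics import mean, pvariance

scores = [  # rows = examinees, columns = items (1 = correct, 0 = incorrect)
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
]
k = len(scores[0])
totals = [sum(row) for row in scores]
m, s2 = mean(totals), pvariance(totals)

# KR-20 needs per-item p (proportion correct) and q = 1 - p.
pq = []
for i in range(k):
    p = sum(row[i] for row in scores) / len(scores)
    pq.append(p * (1 - p))
kr20 = (k / (k - 1)) * (1 - sum(pq) / s2)

# KR-21 needs only the test mean and variance.
kr21 = (k / (k - 1)) * (1 - m * (k - m) / (k * s2))

print(kr20, kr21)  # KR-20 is greater than or equal to KR-21
```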
Item Analysis
Used to determine the quality of individual test items.
- Item difficulty: percent answering correctly
- Item discrimination: how well the item "functions"; also how "valid" the item is, based on the total test score criterion
One Way to Organize Data for Item Analysis
1) Score the tests.
2) Arrange answer sheets from high to low.
3) Separate answer sheets into three subgroups:
   a) upper 27%
   b) middle 46%
   c) lower 27%
4) Count and record responses per foil in the upper group.
5) Count and record responses per foil in the lower group.
6) See figure 10.3.
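The steps above can be sketched in a few lines. The answer sheets and answer key are made up for illustration; the split sizes follow the 27/46/27 percent rule.

```python
# Sketch of steps 1-5: score, sort high to low, split off the upper and
# lower 27% groups, then tally responses per foil for one item.
# Answer sheets and key below are made up for illustration.
from collections import Counter

key = "BDACA"                        # correct answer per item (5-item test)
sheets = [                           # each string = one examinee's answers
    "BDACA", "BDACB", "BDBCA", "ADACA", "CDACD",
    "BCACA", "BDBBA", "CDBCB", "ABACD", "CCBBD",
    "ACBAD",
]

def score(sheet):
    """Number of answers matching the key."""
    return sum(a == c for a, c in zip(sheet, key))

# Steps 1-2: score the tests and arrange from high to low.
ranked = sorted(sheets, key=score, reverse=True)

# Step 3: upper 27% and lower 27% (the middle 46% is set aside).
group_size = round(len(ranked) * 0.27)
upper, lower = ranked[:group_size], ranked[-group_size:]

# Steps 4-5: count responses per foil for one item (item 1 here).
item = 0
print("upper:", Counter(sheet[item] for sheet in upper))
print("lower:", Counter(sheet[item] for sheet in lower))
```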
Item Difficulty

\[ \text{Difficulty} = \frac{\text{correct in upper group} + \text{correct in lower group}}{n_{\text{upper}} + n_{\text{lower}}} \times 100\% \]
Item Discrimination

\[ \text{Discrimination (net D)} = \frac{\text{correct in upper group} - \text{correct in lower group}}{n_{\text{group}}} \times 100\% \]
Item Difficulty and Item Discrimination

\[ \text{Difficulty} = \frac{U + L}{n_U + n_L} \times 100\% \qquad \text{Net } D = \frac{U - L}{n_{\text{group}}} \times 100\% \]

where U and L are the numbers answering correctly in the upper and lower groups.
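The two indices as small functions, checked against item #5 from figure 8.3 (upper group 19 correct, lower group 17 correct, 50 people per group):

```python
# Difficulty: percent of the combined upper + lower groups answering
# correctly. Discrimination (net D): upper correct minus lower correct,
# divided by the size of one group, as a percent.

def difficulty(upper_correct, lower_correct, group_size):
    """Percent correct across both 27% groups combined."""
    return 100 * (upper_correct + lower_correct) / (2 * group_size)

def discrimination(upper_correct, lower_correct, group_size):
    """Net D: how much better the upper group did than the lower group."""
    return 100 * (upper_correct - lower_correct) / group_size

print(difficulty(19, 17, 50))       # 36.0 -> the 36% in figure 8.3
print(discrimination(19, 17, 50))   # 4.0  -> the 4% in figure 8.3
```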
Figure 8.3: Item #5 Responses

                   A    B    C    D*   E/Omit   Diff.   Net D
Upper 27% = 50     2    8   21   19      0       36%     4%
Lower 27% = 50    24    8    1   17      0
Figure 8.3: Item #25 Responses

                    A    B    C    D*    E/Omit   Diff.   Net D
Upper 27% = 300    69   10    5   216      0       53%     37%
Lower 27% = 300    89   52   54   104      1
Figure 8.4: The Relationship Between Item Discrimination and Difficulty
Maximum, Minimum, and Desired Item Difficulty and Item Discrimination

          Difficulty   Discrimination
Maximum      100            100
Minimum        0           -100
Desired       50            100
Sources of Written Tests
- Professionally constructed tests
- Textbooks
- Periodicals, theses, and dissertations
McGee & Farrow
Item Analysis Practice (27% = 50 people)

Item   Upper   Lower   Difficulty   Discrimination
1       32      21
2       18      12
3       40      21
4       50      40
5       36      42
6       25      25
7       15      35
8       35      15
Item Analysis Practice (27% = 50 people)

Item   Upper   Lower   Difficulty   Discrimination
1       32      21        53%            22%
2       18      12        30%            12%
3       40      21        61%            38%
4       50      40        90%            20%
5       36      42        78%           -12%
6       25      25        50%             0%
7       15      35        50%           -40%
8       35      15        50%            40%
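The answer table can be reproduced in a short loop, assuming 50 people per 27% group as stated: difficulty is the combined percent correct, discrimination is net D.

```python
# Check of the practice answers, with 50 people in each 27% group:
# difficulty = (upper + lower) / 100 as a percent,
# discrimination = (upper - lower) / 50 as a percent.
items = {  # item number: (upper-group correct, lower-group correct)
    1: (32, 21), 2: (18, 12), 3: (40, 21), 4: (50, 40),
    5: (36, 42), 6: (25, 25), 7: (15, 35), 8: (35, 15),
}
results = {}
for item, (u, lo) in items.items():
    diff = round(100 * (u + lo) / (2 * 50))   # both groups combined
    disc = round(100 * (u - lo) / 50)         # net D
    results[item] = (diff, disc)
    print(f"item {item}: difficulty {diff}%, discrimination {disc}%")
```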
Each Item Was Completed by the Same Number of People
Complete the chart below by filling in the empty boxes.

Item   Upper correct   Lower correct   Difficulty   Discrimination
1           10                            100%
2                            5                            50%
3            5                             65%
Each Item Was Completed by the Same Number of People

Item   Upper correct   Lower correct   Difficulty   Discrimination
1           10              10            100%             0%
2           10               5             75%            50%
3            5               8             65%           -30%
Questionnaires
1) Determine the objectives.
2) Delimit the sample.
3) Construct the questionnaire.
4) Conduct a pilot study.
5) Write a cover letter.
6) Send the questionnaire.
7) Follow up with non-respondents.
8) Analyze the results and prepare the report.
Constructing Open-Ended Questions
Advantages:
- Allow for creative answers
- Allow the respondent to detail answers
- Can be used when the number of possible categories is large
- Probably better when complex questions are involved
Disadvantages:
- Analysis is difficult because of non-standard responses
- Require more respondent time to complete
- Can be ambiguous
- Can result in irrelevant data
Constructing Closed-Ended Questions
Advantages:
- Easy to code
- Result in standard responses
- Usually less ambiguous
- Ease of response
Disadvantages:
- Frustration if the correct category is not present
- Respondent may choose an inappropriate category
- May require many categories to capture ALL responses
- Subject to possible recording errors
Factors Affecting the Questionnaire Response
- Cover letter: be brief and informative
- Ease of return: you DO want it back!
- Neatness and length: be professional and brief
- Inducements: money and flattery
- Timing and deadlines: time of year and sufficient time to complete
- Follow-up: at least once
The BIG Issues in Questionnaire Development
- Reliability: consistency of measurement
- Validity: truthfulness of response
- Representativeness of the sample: to whom can you generalize?