1
Educational Research, Chapter 5: Selecting Measuring Instruments (Gay and Airasian)
2
Collecting Data
The collection of data is an extremely important part of all research endeavors, for the conclusions of a study are based on what the data reveal. As a result, the kind(s) of data to be collected, the method(s) of collection to be used, and the scoring of the data need to be considered with care.
3
"Data" In this chapter:
- Define the term "data"
- Present several types of instruments that can be used to collect data in a research study
- Describe the different properties that scores are assumed to possess
4
Objectives: By the end of this chapter you should be able to:
1) Explain what is meant by the term "data"
2) Explain what is meant by the term "instrumentation"
3) Name three ways in which data can be collected by researchers
4) Explain what is meant by the term "data-collection instrument"
5) Describe five types of researcher-completed instruments used in educational research
6) Describe five types of subject-completed instruments used in educational research
5
Objectives (continued):
7) Explain what is meant by the term "unobtrusive measures" and give two examples of such measures
8) Name four types of measurement scales and give an example of each
9) Name three different types of scores used in educational research and give an example of each
6
Objectives (continued):
10) Describe briefly the difference between norm-referenced and criterion-referenced instruments
11) Describe how to score, tabulate, and code data for analysis
7
Flow of Activities in Collecting Data
Example
1. Identify the variable – Self-efficacy for learning from others
2. Operationally define the variable – Level of confidence that an individual can learn something by being taught by others
3. Locate data (measures, observations, documents with questions and scales) – 13 items on a self-efficacy attitudinal scale from Bergin (1989)
4. Collect data on instruments yielding numeric scores – Scores on each item ranged from 0 to 10, with 10 being "completely confident."
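The final step above can be made concrete with a short Python sketch. The response values below are invented for illustration (the actual Bergin, 1989, items are not reproduced here); the sketch simply totals and averages the 13 item scores:

```python
# Hypothetical responses to a 13-item self-efficacy scale, each item
# scored 0-10, where 10 means "completely confident" (values invented).
responses = [7, 8, 6, 9, 10, 5, 7, 8, 6, 9, 7, 8, 10]

assert len(responses) == 13
assert all(0 <= r <= 10 for r in responses)

total_score = sum(responses)                 # possible range: 0 to 130
mean_score = total_score / len(responses)    # average confidence per item

print(total_score, round(mean_score, 2))     # prints: 100 7.69
```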
8
Data Collection
- Scientific and disciplined inquiry requires the collection, analysis, and interpretation of data
- Data – the pieces of information that are collected to examine the research topic
- Issues related to the collection of this information are the focus of this chapter
9
Data Collection Terminology related to data
Constructs – abstractions that cannot be observed directly but are helpful when trying to explain behavior. Examples:
- Intelligence
- Teacher effectiveness
- Self-esteem
10
Identify Data Options: Specify Variables
- Independent variables
- Dependent variables
- Intervening variables
- Control variables
- Moderating variables
- Confounding variables
11
Identify Data Options: Operationalize Variables
Operational Definition: The specification of how the variable will be defined and measured
- Typically based on the literature
- Often found in reports under "definition of terms"
- Sometimes the researcher must construct it
12
Some Times When Operational Definitions Would Be Helpful (Figure 2.2)
13
Which of the Following Definitions Are Operational?
Page 34
1. As shown by enthusiasm in class
2. As judged by the student's math teacher using a rating scale she developed
3. As measured by the "Math Interest" questionnaire
4. As shown by attention to math tasks in class
5. As reflected by achievement in mathematics
6. As indicated by records showing enrollment in mathematics electives
7. As shown by effort expended in class
8. As demonstrated by number of optional assignments completed
9. As demonstrated by reading math books outside class
10. As observed by teacher aides using the "Mathematics Interest" observation record
14
Data Collection Data terminology (continued)
Operational definition – the ways by which constructs are observed and measured. Examples:
- Wechsler IQ test
- Virgilio Teacher Effectiveness Inventory
- Tennessee Self-Concept Scale
Variable – a construct that has been operationalized and has two or more values.
15
WHAT ARE DATA? The term "data" refers to the kinds of information researchers obtain on the subjects of their research. The term "instrumentation" refers to the entire process of collecting data in a research investigation.
16
KEY QUESTIONS An important consideration in the choice of an instrument to be used in a research investigation is validity: the extent to which results from it permit researchers to draw warranted conclusions about the characteristics of the individuals studied.
17
CONDITIONS
Instrumentation involves not only the selection or design of the instruments but also the conditions under which the instruments will be administered:
1. Where? – location
2. When? – time
3. How often? – frequency
4. Who? – administration of the instruments
How you answer these questions may affect the data obtained!
18
Good Instruments?
The data provided by any instrument may be affected by any or all of the preceding considerations:
- The instrument is administered incorrectly or is disliked
- Noisy or inhospitable conditions
- Subjects are exhausted
Every instrument, if it is of any value, must allow researchers to draw accurate conclusions about the capabilities or other characteristics of the people being studied.
19
VALIDITY, RELIABILITY, AND OBJECTIVITY
1) An important consideration in the choice of an instrument to be used in a research investigation is validity: the extent to which results from it permit researchers to draw warranted conclusions about the characteristics of the individuals studied.
20
Reliability and Objectivity
2) A reliable instrument is one that gives consistent results. 3) Whenever possible, researchers try to eliminate subjectivity from the judgments they make about the achievement, performance, or characteristics of subjects. That is, the researchers try to be objective.
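The slides do not prescribe a particular method for checking consistency, but one widely used index of internal-consistency reliability is Cronbach's alpha. This is an illustrative Python sketch with invented data (rows are respondents, columns are items on a 5-point scale), not a procedure from the chapter:

```python
from statistics import pvariance

# Invented responses: 5 respondents x 4 items, each scored 1-5.
data = [
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
]

k = len(data[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*data)]  # variance of each item
total_var = pvariance([sum(row) for row in data])   # variance of total scores

# Cronbach's alpha: higher values indicate more consistent items.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Values of alpha near 1 suggest the items behave consistently; values well below about .7 are usually taken as a warning sign.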
21
USABILITY
- Is it easy to use?
- How long will it take to administer?
- Are directions clear?
- Is it appropriate for the ethnic group or other groups to whom it will be administered?
- How easy is it to score? To interpret the scores?
22
Practical Questions
- How much does it cost?
- Do equivalent forms exist?
- Have any problems been reported?
- Does evidence of its reliability and validity exist?
Save time, energy, and headaches!
23
Who Provides the Information
Research instruments can be classified in many ways. Some of the more common classifications are in terms of who provides the data, the method of data collection, who collects the data, and what kind of response they require from the subjects.
24
Data Obtained Research data are data obtained by directly or indirectly assessing the subjects of the study. Self-report data are data provided by the subjects of the study themselves. Informant data are data provided by other people about the subjects of a study.
25
Researcher Instruments
Many types of researcher-completed instruments exist. Some of the more commonly used are rating scales, interview schedules, tally sheets, flowcharts, performance checklists, anecdotal records, and time-and-motion logs.
26
Subject Instruments The types of items or questions used in subject-completed instruments can take many forms, but they all can be classified as either selection or supply items.
28
Subject Instruments
There are also many types of instruments that are completed by the subjects of a study rather than the researcher. Some of the more commonly used of this type are questionnaires; self-checklists; attitude scales; personality inventories; achievement, aptitude, and performance tests; projective devices; and sociometric devices.
29
Subject Instruments (cont.)
Examples of selection items include true-false items, multiple-choice items, matching items, and interpretive exercises. Examples of supply items include short-answer items and essay questions.
30
Where Did the Instruments Come From?
Researchers can either:
1) Find and administer a previously existing instrument of some sort, or
2) Administer an instrument the researcher personally developed or had developed by someone else.
An excellent source for locating already available tests is the ERIC Clearinghouse on Assessment and Evaluation.
31
Data Collection: Measurement scales
- Nominal – categories (gender, ethnicity, etc.)
- Ordinal – ordered categories (rank in class, order of finish, etc.)
- Interval – equal intervals (test scores, attitude scores, etc.)
- Ratio – absolute zero (time, height, weight, etc.)
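A practical consequence of these four scale types is which summary statistics are meaningful at each level. The data below are invented; this Python sketch pairs each scale with a defensible statistic:

```python
from statistics import mean, median, mode

gender = ["F", "M", "F", "F", "M"]        # nominal: unordered categories
class_rank = [1, 2, 3, 4, 5]              # ordinal: order, but unequal gaps
attitude = [72, 85, 90, 65, 88]           # interval: equal units, no true zero
height_cm = [150, 160, 170, 180, 175]     # ratio: true zero, ratios meaningful

print(mode(gender))                 # nominal: only the mode makes sense
print(median(class_rank))           # ordinal: the median respects order
print(mean(attitude))               # interval: means and differences are valid
print(height_cm[3] / height_cm[0])  # ratio: "1.2 times as tall" is valid
```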
32
Four Types of Measurement Scales (Figure 7.25)
SCALE      EXAMPLE
Nominal    Gender
Ordinal    Position in race
Interval   Temperature (in Fahrenheit)
Ratio      Money
33
An Ordinal Scale: The Winner of a Horse Race
Figure 7.27
34
Data Collection: Types of variables
- Categorical or continuous
  - Categorical variables reflect nominal scales
  - Continuous variables reflect ordinal, interval, or ratio scales
- Independent or dependent
  - Independent variables are the purported causes
  - Dependent variables are the purported effects
35
Measurement Instruments
Types of instruments (continued): Affective (continued)
Scales used for responding to items on affective tests:
- Likert
- Semantic differential
- Thurstone
- Guttman
- Rating scales
36
Examples of Items from a Likert Scale Measuring Attitude toward Teacher Empowerment
Figure 7.14
Instructions: Circle the choice after each statement that indicates your opinion.
1. All professors of education should be required to spend at least six months teaching at the elementary or secondary level every five years.
   Strongly agree (5)   Agree (4)   Undecided (3)   Disagree (2)   Strongly disagree (1)
2. Teachers' unions should be abolished.
   Strongly agree (1)   Agree (2)   Undecided (3)   Disagree (4)   Strongly disagree (5)
3. All school administrators should be required by law to teach at least one class in a public school classroom every year.
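Notice that item 2 above is negatively worded, so its response weights run in reverse (Strongly agree = 1). Before items are summed, negatively worded items are typically reverse-coded so that every item points in the same direction. A minimal Python sketch with hypothetical responses:

```python
def reverse_code(score, low=1, high=5):
    """Mirror a Likert response: on a 1-5 scale, 1<->5, 2<->4, 3 stays 3."""
    return low + high - score

# Hypothetical raw responses; item2 is the negatively worded item.
raw = {"item1": 4, "item2": 2, "item3": 5}

scored = dict(raw)
scored["item2"] = reverse_code(raw["item2"])  # 2 becomes 4

total = sum(scored.values())
print(scored["item2"], total)
```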
37
Example of the Semantic Differential (Figure 7.15)
Instructions: Listed below are several pairs of adjectives. Place a checkmark (✓) on the line between each pair to indicate how you feel.
Example
Hockey: exciting :_____:_____:_____:_____:_____:_____:_____:_____: dull
If you feel that hockey is very exciting, you would place a checkmark in the first space next to the word "exciting." If you feel that hockey is very dull, you would place a checkmark in the space nearest the word "dull." If you are sort of undecided, you would place a checkmark in the middle space between the two words.
Now rate each of the activities that follow [only one is listed]:
Working with other students in small groups
friendly :_____:_____:_____:_____:_____:_____:_____:_____: unfriendly
happy :_____:_____:_____:_____:_____:_____:_____:_____: sad
easy :_____:_____:_____:_____:_____:_____:_____:_____: hard
fun :_____:_____:_____:_____:_____:_____:_____:_____: work
hot :_____:_____:_____:_____:_____:_____:_____:_____: cold
good :_____:_____:_____:_____:_____:_____:_____:_____: bad
laugh :_____:_____:_____:_____:_____:_____:_____:_____: cry
beautiful :_____:_____:_____:_____:_____:_____:_____:_____: ugly
38
Measurement Instruments
Issues for cognitive, aptitude, or affective tests:
- Bias – distortions of a respondent's performance or responses based on ethnicity, race, gender, language, etc.
- Responses to affective test items:
  - Socially acceptable responses
  - Accuracy of responses
  - Response sets
- Problems inherent in the use of self-report measures and the use of projective tests
39
Criterion-Referenced vs. Norm-Referenced Evaluation Instruments
Page 158
Criterion-referenced: A student . . .
- spelled every word in the weekly spelling list correctly.
- solved at least 75 percent of the assigned problems.
- achieved a score of at least 80 out of 100 on the final exam.
- did at least 25 push-ups within a five-minute period.
- read a minimum of one nonfiction book a week.
Norm-referenced: A student . . .
- scored at the 50th percentile in his group.
- scored above 90 percent of all the students in the class.
- received a higher grade point average in English literature than any other student in the school.
- ran faster than all but one other student on the team.
- and one other in the class were the only ones to receive A's on the midterm.
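The contrast above can be expressed in a few lines of Python: a criterion-referenced interpretation compares a score to a fixed standard, while a norm-referenced interpretation compares it to the other examinees. The scores below are invented:

```python
# Invented exam scores for a group of ten students (0-100 scale).
scores = [55, 62, 70, 74, 78, 81, 85, 88, 92, 95]
student_score = 85

# Criterion-referenced: did the student reach "at least 80 out of 100"?
meets_criterion = student_score >= 80

# Norm-referenced: what percentage of the group did the student outscore?
percentile = 100 * sum(s < student_score for s in scores) / len(scores)

print(meets_criterion, percentile)
```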
40
Selection of a Test: Designing Your Own Tests
- Get help from others with experience developing tests
- Item-writing guidelines:
  - Avoid ambiguous and confusing wording and sentence structure
  - Use appropriate vocabulary
  - Write items that have only one correct answer
  - Give information about the nature of the desired answer
  - Do not provide clues to the correct answer
  - See Writing Multiple Choice Items
41
Selection of a Test: Test Administration Guidelines
- Plan ahead
- Be certain that there is consistency across testing sessions
- Be familiar with any and all procedures necessary to administer a test
42
Identify Data Options: Select Scales of Measurement
- Nominal (Categorical): categories that describe traits or characteristics participants can check
- Ordinal: participants rank order a characteristic, trait, or attribute
43
Identify Data Options: Select Scales of Measurement
- Interval: provides "continuous" response possibilities to questions with assumed equal distance
- Ratio: a scale with a true zero and equal distances among units
44
Record and Administer Data Collection: Locate or Develop an Instrument
- Develop your own instrument
- Locate an existing instrument
- Modify an existing instrument
45
Record and Administer Data Collection: Obtain Reliable and Valid Data
Validity: the ability to draw meaningful and justifiable inferences from the scores about a sample or a population
Types of validity:
- Content (representative of all possible questions that could be asked)
- Criterion-related (scores are a predictor of an outcome or criterion they are expected to predict)
- Construct (determination of the significance, meaning, purpose, and use of the scores)
46
Record and Administer Data Collection: Develop Administration Procedures for Data Collection
- Develop standard written procedures for administering an instrument
- Train researchers to collect observational data
- Obtain permission to collect and use public documents
- Respect individuals and sites during data gathering
47
Illustration of Types of Evidence of Validity
Figure 8.1
48
Reliability and Validity
Figure 8.2
49
Methods of Checking Validity and Reliability
Table 8.2, page 180
50
More About Research: Threats to Internal Validity in Everyday Life
Box 9A, page 199
Consider the following commonly held beliefs:
- Because "failure" often precedes "suicide," it is therefore the cause of "suicide." (probable history and mortality threat)
- Boys are genetically more talented in mathematics than are girls. (probable subject attitude and location threats)
- Girls are genetically more talented in language than are boys. (probable location and subject attitude threats)
- Minority students are less academically able than students from the dominant culture. (probable subject characteristics, subject attitude, location, and instrumentation threats)
- People on welfare are lazy. (probable subject characteristics, location, and history threats)
- Schooling makes students rebellious. (probable maturation and history threats)
- A policy of temporarily expelling students who don't "behave" improves a school's test scores. (probable mortality threat)
- Indoctrination changes attitude. (probable testing threat)
- So-called miracle drugs cure intellectual retardation. (probable regression threat)
- Smoking marijuana leads eventually to using cocaine and heroin. (probable mortality threat)
51
Illustration of Threats to Internal Validity (Figure 9.2)
Note: We are not implying that any of these statements are necessarily true; our guess is that some are and some are not. *This seems unlikely. †If these teacher characteristics are a result of the type of school, then they do not constitute a threat.
52
General Techniques for Controlling Threats to Internal Validity
Table 9.1, page 202
53
Technical Issues: Validity (continued)
- Consequential – to what extent are the consequences that occur from the test harmful?
  - Estimated by empirical and expert judgment
- Factors affecting validity:
  - Unclear test directions
  - Confusing and ambiguous test items
  - Vocabulary that is too difficult for test takers
54
Technical Issues: Validity (continued) – Factors affecting validity
- Overly difficult and complex sentence structure
- Inconsistent and subjective scoring
- Untaught items
- Failure to follow standardized administration procedures
- Cheating by the participants or someone teaching to the test items
55
Technical Issues: Validity – the extent to which interpretations made from a test score are appropriate
Characteristics:
- The most important technical characteristic
- Situation specific
- Does not refer to the instrument but to the interpretations of scores on the instrument
- Best thought of in terms of degree
56
Technical Issues: Validity (continued) – Four types
- Content – to what extent does the test measure what it is supposed to measure?
  - Item validity
  - Sampling validity
  - Determined by expert judgment