
1 EDRS6208 Lecture Three Instruments and Instrumentation Data Collection


3 Objectives
At the end of the lecture students will be able to:
Identify instruments used for quantitative data collection.
Explain the standards used for judging the quality of data collection.
Define survey research.
Explain the factors to consider when designing survey research.

4 Data Collection
The purpose of data collection is to learn something about other people or things. The focus is on the particular attribute or quality of the person or setting (Mertens, 1998, p. 286).

5 Planning Data Collection
Identify the attributes of interest.
Decide how to collect data about these attributes.
Consider the target population.
Establish the quality of the data collected.

6 Composition of Instrument
Constructs need to be defined (the underlying ability being measured).
The target population must be taken into consideration.
Validity of the instrument.
Reliability of the instrument.

7 Standards for judging quality of data collection in quantitative research

8 Reliability
Reliability: the extent to which an instrument would yield consistent results when the characteristic being measured has not changed (Leedy & Ormrod, 2005, p. 93).

9 Forms of Reliability
Reliability with observers: interrater reliability, intrarater reliability.
Internal consistency.
Repeated measures: test-retest reliability, parallel forms.

10 Reliability with observers
Interrater reliability: when data are collected through observation, the researcher ascertains reliability between two independent observers or raters.
Intrarater reliability: comparisons are made between two data collection efforts by the same observer.

11 Internal Consistency
Estimates reliability by grouping questions in a questionnaire that measure the same concept. Example: write two sets of three questions that measure the same concept; after collecting the responses, run a correlation between those two groups of three questions to determine whether the instrument is reliable (see the sketch after the next slide).

12 Repeated Measures Reliability
Test-retest: an instrument is administered to a group of individuals twice; the second administration can occur immediately or after a time delay, and the scores are compared.
Parallel forms: similar to test-retest, except that an equivalent form of the test is administered the second time.
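Both the split-half check on slide 11 and the test-retest check above come down to correlating two sets of scores. A minimal Python sketch, assuming hypothetical score data and that SciPy is available:

```python
# Both reliability checks reduce to a Pearson correlation between two
# score vectors. All the numbers below are invented for illustration.
from scipy.stats import pearsonr

# Split-half: each respondent's summed score on two groups of three questions.
half_a = [11, 9, 14, 8, 12, 10]    # hypothetical totals, questions 1-3
half_b = [12, 8, 13, 9, 11, 10]    # hypothetical totals, questions 4-6
r_split, _ = pearsonr(half_a, half_b)

# Test-retest: the same instrument administered twice to the same people.
time1 = [55, 62, 48, 70, 66, 59]   # hypothetical first-administration scores
time2 = [57, 60, 50, 68, 67, 58]   # hypothetical second-administration scores
r_retest, _ = pearsonr(time1, time2)

print(f"split-half r = {r_split:.2f}, test-retest r = {r_retest:.2f}")
```

A high correlation in either case suggests the instrument is consistent. In practice the split-half value is often adjusted upward with the Spearman-Brown formula, since each half is shorter than the full instrument.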

13 Validity
Validity: the extent to which the instrument and/or design measures what it actually intends to measure.
Instrument validity: in measurement, the degree to which the concepts studied are accurately represented by the items on the instrument.
Research design validity: the degree to which the research design adequately represents the concepts being studied.


15 Types of Validity
Content
Criterion: concurrent, predictive
Internal
External

16 Content Validity
The extent to which a measurement instrument is a representative sample of the content area being measured. Important in studies that purport to compare two or more different curricula or teaching strategies.

17 Criterion Validity
The extent to which the results of an assessment instrument correlate with another measure (it is theory-based).
Predictive: the extent to which the instrument predicts the outcomes that are expected.
Concurrent: the extent to which scores on the instrument agree with scores on other factors that are expected to relate to them.

18 Internal Validity
Establishes whether there is a causal relationship between the predictor and the response. Example: did the attendance policy cause class participation to increase?
Predictor: attendance policy
Response: class participation

19 External Validity
The ability to generalise the results of the study to other settings:
From one sample to another
From one setting to another

20 Threats to internal validity
History
Maturation
Testing
Instrumentation
Mortality
Regression

21 History
This happens when an outside event affects the data collection process. In the earlier example, it may not have been the stricter attendance policy that caused the increase in participation: if a number of students had been expelled earlier because they did not participate in school activities, that event would have affected the results.

22 Maturation
When changes in the participants over time affect the outcome. For example, if the students in the sample “grew up” and realised the importance of participating in class, that maturation, and not the stricter attendance policy, may have increased their participation.

23 Testing
The impact of a pre-test on subsequent tests. For example, if you measured students’ participation before implementing the attendance policy, students forewarned that participation was being emphasised may have increased their participation for that reason. The outcome may then be the result of a testing threat rather than the treatment.

24 Instrumentation The instrument is not precise and does not measure what it intends to measure.

25 Mortality
Subjects drop out of the research, which leads to an inflated measure of the effect. For example, if a stricter attendance policy causes most students to drop out of a class, leaving only the more serious ones, the effect will be overestimated.

26 Regression
There is a tendency for participants who score at the extremes on the pre-test to score closer to the mean on the post-test.

27 Other threats to internal validity: multiple-group threats
Group differences
Groups mature at different rates
The treatment is communicated between groups
The control group over-performs to compete with the treatment group
The control group gets discouraged when the treatment is withheld

28 Threats to External Validity
Hawthorne effect: the response changes when participants know they are subjects of an experiment.
Novelty and disruption effect: a treatment may appear effective merely because it is new or because it disrupts the normal routine.
Explicit description of the treatment: the researcher needs to describe the treatment in enough detail that others can replicate the study.

29 Multiple treatment influence: when using more than one treatment, one treatment may influence the effectiveness of another due to sequencing.
Pre-test sensitisation: the pre-test interacts with the experimental treatment and influences the response.
Interaction of history and treatment effects: history may interact with the treatment effects, which makes replication difficult.

30 Measurement of the response: how you measure the dependent variable has to be consistent with its definition.
Interaction of time of measurement and treatment: measuring the dependent variable at different times can produce different results.

31 Difference between Reliability and Validity
Reliability estimates the consistency of the measurement; validity concerns the degree to which you are measuring what you are supposed to measure. Question: if an instrument does not accurately measure what it is supposed to measure, can it still measure consistently?

32 Some ways to improve validity
Ensure goals and objectives are clearly defined.
Match assessment measures with goals and objectives.
Have instruments reviewed using a variety of methods.
Compare measures with other measures where possible.
Control for confounding variables.

33 Data Collection Instruments in Quantitative Research
Archival databases
Surveys/questionnaires
Tests
Attitudinal scales
Interviews
Observations

34 Survey Design

35 What is Survey Research?
A method of acquiring information about one or more groups of people – perhaps about their characteristics, opinions, attitudes, or previous experiences – by asking them questions and tabulating their answers (Leedy & Ormrod, 2005, p. 147).
Aim: to learn about a large population by surveying a sample of that population.

36 Steps in conducting a survey
Establish the goals of the research
Determine the sample
Create the questionnaire
Pre-test the questionnaire
Analyse the data

37 Goals of Research
What do you want to learn? This determines whom you will survey and what you will ask them. If the goals are unclear, the results will not be clear.

38 Select your sample
What is your target population?
What is your sample size?
How will the representative sample be drawn?
How will you avoid a biased sample?
What is your sampling strategy?
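To make the sampling step concrete, here is a minimal Python sketch of drawing a simple random sample. The sampling frame (`student_ids`) and the sample size are hypothetical, and simple random sampling is only one possible strategy:

```python
# Drawing a simple random sample, without replacement, from a hypothetical
# sampling frame of 500 student IDs. Seeding makes the draw reproducible.
import random

student_ids = list(range(1, 501))   # hypothetical frame: students 1..500
sample_size = 50                    # hypothetical sample size

random.seed(42)                     # fixed seed so the draw can be repeated
sample = random.sample(student_ids, sample_size)
print(sorted(sample)[:10])          # inspect the first few sampled IDs
```

Stratified, cluster, or systematic sampling would replace the `random.sample` call with a scheme that respects the relevant group structure.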

39 Survey Type
Establish the type of survey:
Telephone surveys
Web-based surveys
Personal interviews
Mail surveys
Email surveys

40 Factors to consider when choosing a survey method
Speed: email and web-based surveys are the fastest methods, followed by telephone interviewing; mail surveys are the slowest.
Cost: personal interviews are the most expensive, followed by telephone and then mail; email and web-based surveys are the least expensive for large samples.
Internet usage: web-page and email surveys offer significant advantages, but you may not be able to generalise their results to the population.

41 Literacy levels: illiterate and less-educated people rarely respond to written surveys.
Sensitive questions: people are more likely to answer sensitive questions when interviewed directly by a computer in one form or another.
Video, sound, graphics: a need to get reactions to a video, music, or a picture limits your options. You can play a video on a web page, in a computer-directed interview, or in person; you can play music with these methods or over the phone; you can show pictures on a web page, in a computer-directed interview, in person, or in a mail survey.

42 Questionnaire Design
KISS: keep it short and simple.
Start with an introduction or a welcome message.
Allow a “don’t know” or “not applicable” response to all questions except those for which you are certain respondents will have a clear answer.
Include “other” or “none” whenever they are logically possible answers.
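One way to make the “don’t know” / “not applicable” defaults hard to forget is to build them into whatever structure generates each question. A minimal sketch, assuming a hypothetical Python representation (the `Question` class and its fields are invented for illustration):

```python
# A hypothetical question record that includes the escape options
# ("Don't know", "Not applicable") by default, per the guideline above.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    choices: list                    # substantive answer options
    allow_dont_know: bool = True     # opt out only when an answer is certain
    allow_not_applicable: bool = True

    def render_choices(self):
        opts = list(self.choices)    # copy so the original list is untouched
        if self.allow_dont_know:
            opts.append("Don't know")
        if self.allow_not_applicable:
            opts.append("Not applicable")
        return opts

q = Question("How do you travel to campus?", ["Bus", "Car", "Walk", "Other"])
print(q.render_choices())
```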

43 Question Types
Multiple choice
Numeric open-end
Text open-end
Rating scales
Agreement scales

44 Question and answer choice order
Early questions should be easy and pleasant to answer.
Place difficult and sensitive questions near the end of the survey when possible.
Whenever there is a logical or natural order to answer choices, use it (e.g. agree to disagree, in that order).
Randomise the order of related questions, as in the sketch below.
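A minimal Python sketch of that last guideline, randomising one block of related questions for each respondent while keeping the opener and closer fixed; the question texts are invented:

```python
# Randomise the order of a block of related questions so that order
# effects average out across respondents; other questions stay fixed.
import random

opening = ["What is your year of study?"]           # easy, pleasant opener
related_block = [                                   # hypothetical related items
    "How satisfied are you with lecture pacing?",
    "How satisfied are you with lecture content?",
    "How satisfied are you with lecture delivery?",
]
closing = ["Any other comments?"]                   # difficult/sensitive last

random.shuffle(related_block)       # a fresh order for each respondent
questionnaire = opening + related_block + closing
for item in questionnaire:
    print(item)
```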

45 Pilot the Questionnaire
The last step in questionnaire design is to test the questionnaire with a small sample before you conduct the main research. This will reveal unanticipated problems with question wording.

46 Class Discussion
Do you think a more reliable test is automatically more valid?
Think of examples of surveys that would be best conducted (a) by mail, (b) face to face, and (c) on the telephone. What are the advantages and disadvantages of each method in terms of efficiency?

47 Assignment Two (due October 8, 2010)
Page 198: number 10.
Page 347: numbers 1, 2, and 4.

