Issues in Evaluating Educational Research
Evaluation of Research Validity
Validity: Can the findings and conclusions be trusted?
Four steps to analyzing a research study:
1. What are the research questions?
2. Does the research design match the research questions?
3. How was the study conducted?
4. Are there rival explanations for the results?
Evaluation of Research Validity
Step 1: Find the research question(s)
Review: What are descriptive questions? What are experimental questions?
Evaluation of Research Validity
Step 2: Confirm that the design matches the question(s)
Two key features:
- Assignment of study participants
- Manipulation of an independent variable
Example 1
Question: Does increasing the amount of professional development teachers receive increase student achievement?
- What type of question is this: causal or descriptive?
- What type of research design is needed: experimental or descriptive? If descriptive, what type?
- Who are the participants?
- What are the independent and dependent variables?
Pg 43
Example 1 continued
Scenario A: Study participants are randomly assigned to two or more comparison groups that receive different amounts of professional development. Is this a true experimental or quasi-experimental design? Why?
Scenario B: Study participants are assigned to professional development groups based on some kind of characteristic, such as grade level or years of teaching experience. Is this a true experimental or quasi-experimental design?
Pg 43
A sketch contrasting the two assignment schemes follows.
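To make the distinction concrete, here is a minimal Python sketch under stated assumptions: the teacher IDs and the grade-level attribute are hypothetical illustrations, not the study's actual procedure.

import random

teachers = [f"teacher_{i}" for i in range(20)]  # hypothetical participants

# True experimental design: random assignment, so any baseline
# differences between the groups are due to chance alone.
random.shuffle(teachers)
treatment_group, control_group = teachers[:10], teachers[10:]

# Quasi-experimental design: assignment follows a characteristic
# (a hypothetical grade level), so groups may differ systematically.
grade_level = {t: random.choice([3, 4]) for t in teachers}
group_a = [t for t in teachers if grade_level[t] == 3]
group_b = [t for t in teachers if grade_level[t] == 4]

Random assignment is what rules out selection bias; the characteristic-based split does not.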
Example 2
Question: Does teacher professional development have a positive association with student achievement?
- What type of question is this: causal or descriptive?
- What type of research design is needed: experimental or descriptive? If descriptive, what type?
- Who are the participants?
- What are the independent and dependent variables?
Pg 43
Evaluation of Research Validity
Step 3: How was the study conducted?
There should be enough information that the study can be replicated. Without this information it can be difficult to judge validity.
Four key areas:
- Participants
- Treatment
- Data collection
- Data analysis
Participants
The report should describe:
- Number of participants
- Characteristics of participants, not only persons but schools and districts as well. Look for characteristics that might influence results:
  - Student characteristics (e.g., grade level, gender, socioeconomic status)
  - Teacher (classroom) characteristics (e.g., experience, grade level, class size)
  - School characteristics (e.g., number of students, teachers, or paraprofessionals; location; grade levels)
  - District characteristics (e.g., number of schools, number of students, location)
Participants continued
Participant selection:
- How were participants selected?
- Selection is often nonrandom in education research.
- When selection is nonrandom, conclusions can only be made about the sample of participants in the study.
- Without random assignment, selection bias is a concern. Ex: if a researcher selects teachers to participate in one of two types of professional development based on school location, the results could be influenced by the schools' characteristics rather than by the professional development itself.
Treatment
Treatment: the program, policy, or practice being studied.
- In experimental research this is the independent variable.
- Operational definitions must be provided. Ex: professional development is a class in literacy instruction that teachers attend after school, 2 times per week for 2 hours.
- Construct validity: the treatment is defined in a way that is a valid example or representation of the construct being studied. Ex: a definition of professional development that included going out to lunch 2 times per week would not have construct validity.
Treatment continued
In addition to a valid definition, treatments must be implemented consistently.
- Treatment fidelity should be reported: was the treatment carried out as planned?
- Were there any events during the treatment that may have influenced the results (ex: a conference on reading literacy held during a reading-literacy professional development study)?
Data Collection
Two key factors in data collection validity:
- The data collection instrument
- The data collection procedures
Are the data collection instruments valid and reliable?
- Validity: the instrument measures what it is supposed to measure.
- Reliability: repeated measures in a short time produce similar results (a sketch follows below).
How an instrument is used also matters. Ex: a valid algebra test may not be a valid measure of a person's ability to teach algebra.
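A minimal sketch of checking test-retest reliability, assuming hypothetical paired scores from two administrations of the same instrument one week apart (requires Python 3.10+ for statistics.correlation):

from statistics import correlation  # available in Python 3.10+

week1 = [70, 82, 65, 90, 75, 88]  # hypothetical first administration
week2 = [72, 80, 67, 91, 73, 86]  # hypothetical second administration

# A reliable instrument yields highly correlated scores across
# repeated administrations over a short interval.
r = correlation(week1, week2)
print(f"test-retest correlation: {r:.2f}")  # close to 1.0 here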
Data Collection continued
Data collection procedures: how and when data are collected.
- Procedures used to collect data can influence results, so they must be carefully designed and described in the report. Ex: giving survey respondents anonymity (or not) may affect how they respond.
- Day and time of data collection matter. Data collected the day before spring break may not produce valid results.
- In experimental research, comparison and control groups must use the same data collection procedures.
Data Analysis
First determine whether the data collected are quantitative or qualitative.
Review: What is quantitative data? Qualitative?
Quantitative data analysis:
- Are statistics used to analyze the quantitative data?
- The statistics are analyzed and discussed in the report (a sketch of one common test follows).
Qualitative data analysis:
- Information is organized into categories.
- Categories are coded.
- Codes and coding procedures are explained.
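As one concrete example of a statistic a report might use, here is a minimal sketch, assuming hypothetical posttest scores for a treatment and a control group and assuming an independent-samples t-test is appropriate for the data:

from scipy import stats

treatment_scores = [78, 85, 80, 90, 74, 88, 82]  # hypothetical
control_scores = [75, 79, 72, 83, 70, 81, 76]    # hypothetical

# An independent-samples t-test compares the two group means; the
# report should state which test was used and why it is appropriate.
t_stat, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")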
Evaluation of Research Validity
Step 4: Detect rival explanations for the results (threats to validity)
- Conclusions are presented at the end of the report.
- When judging the conclusions, think about possible rival explanations for the results.
- It is the researcher's job to rule out possible rival explanations and to explain how or why they do not apply to the study.
Rival explanations continued
Quantitative research:
- It is especially important to rule out rival explanations when a treatment works.
- Several factors are common in quantitative studies of the effectiveness of an intervention in experimental research:
  - Selection bias: how subjects are assigned to treatment or control groups.
  - Sample attrition: when more subjects leave the treatment group than the control group, or vice versa.
  - Treatment diffusion: when participants in different comparison groups operate in the same environment (p. 49).
  - History effects: changes that occur in long research studies.
  - Practice effects: repeated measures of the same individuals (pre- and posttests).
  - Regression toward the mean: extremely high or extremely low pretest scores tend to move toward the average on the posttest (see the simulation sketch after this list).
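A minimal simulation sketch of regression toward the mean, assuming hypothetical scores generated as true ability plus random measurement noise:

import random

random.seed(1)
true_ability = [random.gauss(70, 5) for _ in range(1000)]
pretest = [a + random.gauss(0, 10) for a in true_ability]
posttest = [a + random.gauss(0, 10) for a in true_ability]

# Take the 50 students with the lowest pretest scores.
lowest = sorted(range(1000), key=lambda i: pretest[i])[:50]

pre_mean = sum(pretest[i] for i in lowest) / 50
post_mean = sum(posttest[i] for i in lowest) / 50

# With no treatment at all, the extreme group's posttest mean drifts
# back toward the overall mean -- a rival explanation for apparent gains.
print(f"lowest-50 pretest mean: {pre_mean:.1f}")
print(f"lowest-50 posttest mean: {post_mean:.1f}")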
Rival Explanations continued
Qualitative research:
- The main validity concern is the credibility of results.
- Use verification methods such as triangulation (p. 136).
Researcher and participant effects:
- Researcher bias: the researcher's expectations influence the study. One safeguard is to use blind data collectors.
- Participant reactivity: participants react a certain way based on the research context (p. 51). One safeguard is to design questions carefully.
A sketch of organizing coded qualitative data into categories follows.
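A minimal sketch of the category-and-code bookkeeping described above, assuming hypothetical interview excerpts and hypothetical code names:

from collections import Counter

# Hypothetical excerpts, each assigned a code by the researcher.
coded_excerpts = [
    ("I finally feel confident teaching fractions", "self-efficacy"),
    ("The workshop sessions ran too long", "logistics"),
    ("My students ask better questions now", "student-engagement"),
    ("I feel ready to try new strategies", "self-efficacy"),
]

# Organize coded data into categories and count each code's frequency;
# the report should explain the codes and the coding procedures.
counts = Counter(code for _, code in coded_excerpts)
for code, n in counts.most_common():
    print(f"{code}: {n}")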