1
Day 6: Non-Experimental & Experimental Design
Where are the beakers??
2
What kind of research is considered the “gold standard” by the Institute of Education Sciences?
Descriptive? Causal-Comparative? Correlational? Experimental? Why?
3
Why does most educational research use non-experimental designs?
4
What is the purpose of non-experimental designs?
5
Causal-Comparative Example
Green & Jaquess (1987) were interested in the effect of high school students’ part-time employment on their academic achievement. Sample: 477 high school juniors who were either unemployed or employed more than 10 hours per week.
6
Causal-Comparative Design
A study in which the researcher attempts to determine the cause, or reason, for pre-existing differences in groups of individuals. At least two different groups are compared on a dependent variable or measure of performance (the “effect”) because the independent variable (the “cause”) has already occurred or cannot be manipulated.
7
Causal-Comparative Design
Ex post facto: causes are studied after they have exerted their effect on another variable.
8
Causal-Comparative Design
Drawbacks: It is difficult to establish causality from the collected data. Unmeasured (confounding) variables are always a potential source of alternative causal explanations.
9
Some Thought Questions…
10
Correlational Design: determines whether, and to what degree, a relationship exists between two or more quantifiable variables.
11
Example of Correlation
12
Correlational Design: The degree of the relationship is expressed as a coefficient of correlation. Examples: the relationship between math achievement and math attitude; the relationship between the degree of a school’s racial diversity and students’ use of stereotypical language. Your topics?
13
Correlation coefficient…
-1.00 = strong negative relationship; 0.00 = no relationship; +1.00 = strong positive relationship
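To make the scale concrete, here is a minimal sketch in Python (standard library only; the attitude and achievement scores are invented for illustration) that computes a Pearson correlation coefficient:

```python
import statistics

def pearson_r(xs, ys):
    # Pearson r: covariance of x and y divided by the product of their
    # standard deviations (computed here from sums of squared deviations).
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical data: math attitude (1-5 scale) and math achievement (0-100).
attitude = [2, 3, 3, 4, 4, 5, 1, 2, 5, 3]
achievement = [55, 62, 70, 78, 74, 90, 48, 60, 88, 65]
print(round(pearson_r(attitude, achievement), 2))  # about 0.98: a strong positive relationship
```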
14
Advantages of Correlational Design
Allows analysis of relationships among a large number of variables in a single study. Provides information about the degree of the relationship between the variables being studied.
15
Cautions: A relationship between two variables does not mean that one causes the other (think about the reading achievement and body weight correlations). The possibility of low reliability in the instruments makes it difficult to identify relationships.
16
Cautions: Lack of variability in scores (e.g., everyone scoring very low, or everyone scoring very high) makes it difficult to identify relationships. Large sample sizes and/or the use of many variables can produce statistically significant relationships for purely statistical reasons, not because the relationships really exist (avoid the shotgun approach).
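To illustrate the large-sample caution, a minimal simulation sketch (Python; it assumes NumPy and SciPy are available, and all data are simulated rather than real):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
# y is almost entirely noise, with only a faint trace of x mixed in.
y = 0.03 * x + rng.normal(size=n)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
# With n this large, r stays tiny (around .03) yet p is usually well
# below .05 -- statistically significant, but practically meaningless.
```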
17
Cautions: You need to identify your sample to know what is actually being compared. If using predictor variables, the time interval between collecting the predictor data and the criterion data is important.
18
Correlational Designs
Guidelines for interpreting the size of correlation coefficients: Much larger correlations are needed for predictions about individuals than about groups. Crude group predictions can be made with correlations as low as .40 to .60; predictions for individuals require correlations above .75.
19
Correlational Designs
Guidelines for interpreting the size of correlation coefficients: In exploratory studies, correlations of .25 to .40 indicate the need for further research. Much higher correlations are needed to confirm or test hypotheses.
20
Correlational Designs
Criteria for evaluating correlational studies: Causation should not be inferred from correlational studies. Practical significance should not be confused with statistical significance. The size of the correlation should be sufficient for the intended use of the results (individuals vs. groups).
21
Think… If you were going to take your action research topic and create a causal-comparative study, what would it look like? --OR-- If you were going to take your action research project and create a correlational study, what would it look like?
22
Experimental Design The Gold Standard?
23
To Review Why is most educational research comprised of non-experimental research designs?
24
To Review What is the purpose of non-experimental research?
25
To Review How does the independent variable function in non-experimental research?
26
To Review Can non-experimental research claim causality?
27
An example: Read the example given in class and, in pairs, respond to the questions.
28
Experimental Research
Purpose: to make causal inferences about the relationship between the independent and dependent variables. Characteristics: direct manipulation of the independent variable; control of extraneous variables.
29
Experimental Designs
Examples: Single Group Post-test; Single Group Pre-test Post-test; Non-Equivalent Groups Post-test; Quasi-Experimental Design; Randomized Post-test Only; Randomized Pre-test Post-test; Factorial
30
Experimental Validity
Internal validity: the extent to which the independent variable, and not other extraneous variables, produced the observed effect on the dependent variable. External validity: the extent to which the results are generalizable.
31
Internal Validity: threats are factors that reduce the level of confidence in any causal conclusions. Key question: Is this a plausible threat to the internal validity of the study?
32
Threats to Internal Validity
History: extraneous events have an effect on the subjects’ performance on the dependent variable (e.g., the crash of the stock market, 9/11, the invasion of Iraq). Selection: groups are initially not equal due to differences in the subjects in those groups (e.g., positive vs. negative attitudes, high vs. low achievers).
33
Threats to Internal Validity
Maturation: changes experienced within the subject over time. Pretesting: the effect of having taken a pretest. Instrumentation: poor technical quality (i.e., validity, reliability) or changes in instrumentation.
34
Threats to Internal Validity
Subject attrition: differential loss of subjects from groups. Statistical regression: the natural movement of extreme scores toward the mean. Diffusion of treatment: the treatment is given to the control group. Experimenter effects: differing characteristics or expectations of those implementing the treatments across groups.
35
Threats to Internal Validity
Subject effects: the effects of being aware that one is involved in a study. Four types: the Hawthorne effect, the John Henry effect, resentful demoralization, and the novelty effect.
36
Internal Validity Key Point: Ultimately, validity is a matter of judgment. Ask whether it is plausible that the potential threats affected the results.
37
External Validity: the extent to which results can be generalized from a sample to a particular population. Question: Why does really good internal validity often result in poor external validity?
38
External Validity: factors affecting external validity
Subjects: representativeness of the sample in comparison to the population; personal characteristics of the subjects
Situations (characteristics of the setting): specific environment; special situation; particular school
39
External Validity: the importance of explaining the sampling procedures.
40
Experimental Designs
Examples: Single Group Post-test; Single Group Pre-test Post-test – Libby, Deb; Non-Equivalent Groups Post-test – Mary, Cheryl; Quasi-Experimental Design – Pete, Laura; Randomized Post-test Only – Amanda, Nicole, Tam; Randomized Pre-test Post-test – Karen, Jen, Justin
41
Your Task: Based on the topic of your proposal, design an experimental study using the design you were assigned. Write a research question and hypothesis. Sketch out the methods. Identify the strengths and weaknesses of each design.
42
Experimental Designs Notation
R indicates random selection or random assignment. O indicates an observation (a test score, observation score, or scale score). X indicates a treatment. A, B, C, ... indicate different groups.
43
Pre-Experimental Designs
No pre-experimental design controls internal validity threats well.
Single group post-test only:
A  X  O
Internal validity threats: history, maturation, attrition, experimenter effects, subject effects, and instrumentation are all viable threats.
Useful only when the researcher is sure of the status of the knowledge, skill, or attitude being changed and there are no extraneous variables affecting the results.
44
Pre-Experimental Designs
Single group pretest post-test:
A  O  X  O
Internal validity threats: maturation and pretesting are threats; history and instrumentation are potential threats.
Useful when subject effects will not influence the results, history effects can be minimized, and multiple pretests and post-tests are used.
45
Pre-Experimental Designs
Non-equivalent groups post-test only:
A  X  O
B     O
Internal validity threats: selection is a definite threat; history, maturation, and instrumentation are potential threats.
Useful when groups are comparable and subjects can be assumed to be about the same at the beginning of the study.
46
Quasi-Experimental Designs
Types:
Non-equivalent pretest/post-test, experimental and control groups
A  O  X  O
B  O     O
Non-equivalent pretest/post-test, multiple treatment groups
A  O  X1  O
B  O  X2  O
Useful when subjects are in pre-existing groups (e.g., classes, schools, teams).
47
Quasi-Experimental Designs
Threats to internal validity: selection is the major concern. These designs are likely to control for most other threats, provided the groups are not significantly different from one another. See Table 9.2 for specific threats related to each design.
48
True Experimental Designs
Important terminology:
Random assignment: subjects are placed into groups at random; ensures equivalency of the groups.
Random selection of subjects: subjects are chosen from the population at random; ensures generalizability to the population from which the subjects were selected (i.e., external validity).
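A minimal sketch of the difference between the two operations (Python, standard library only; the roster and group sizes are invented):

```python
import random

random.seed(42)

# Hypothetical population we want to generalize to.
population = [f"student_{i:03d}" for i in range(1, 201)]

# Random selection: draw a sample from the population
# (supports generalizability, i.e., external validity).
sample = random.sample(population, k=30)

# Random assignment: split the selected sample into treatment and
# control groups by chance (supports equivalency of the groups).
random.shuffle(sample)
treatment_group, control_group = sample[:15], sample[15:]

print(len(treatment_group), len(control_group))  # 15 15
```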
49
True Experimental Designs
Types:
Randomized post-test only, experimental and control groups
R  A  X  O
R  B     O
Randomized post-test only, multiple treatment groups
R  A  X1  O
R  B  X2  O
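To show how the randomized post-test only notation plays out in an analysis, a minimal simulation sketch (Python; NumPy and SciPy are assumed to be available, and all scores are simulated, not real data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# R: 60 subjects are randomly assigned to group A (treatment) and group B (control).
n_per_group = 30

# X then O: group A receives the treatment, then both groups take a post-test.
# Simulated scores: the treatment is assumed to add about 5 points on average.
post_a = rng.normal(loc=75, scale=10, size=n_per_group)  # treatment group
post_b = rng.normal(loc=70, scale=10, size=n_per_group)  # control group

t, p = stats.ttest_ind(post_a, post_b)
print(f"mean A = {post_a.mean():.1f}, mean B = {post_b.mean():.1f}")
print(f"t = {t:.2f}, p = {p:.3f}")  # a small p would support a treatment effect
```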
50
True Experimental Designs
Types (continued):
Randomized pretest/post-test, multiple treatment groups
R  A  O  X1  O
R  B  O  X2  O
Randomized pretest/post-test, experimental and control groups
R  A  O  X  O
R  B  O     O
51
True Experimental Designs
Threats to internal validity: these designs control for selection, maturation, and statistical regression, and are likely to control for most other threats. See Table 9.2 for specific threats related to each design.
52
Evaluating Experimental Designs
Criteria for evaluating experimental research: The primary purpose is to test causal hypotheses. There should be direct manipulation of the independent variable. There should be clear identification of the specific research design.
53
Evaluating Experimental Designs
Criteria for evaluating experimental research (continued): The design should provide maximum control of extraneous variables. Treatments should be substantively different from one another. The number of subjects depends on, and may equal, the number of treatment replications.