Experimental and Quasiexperimental Designs
Chapter 10
Copyright © 2009 Elsevier Canada, a division of Reed Elsevier Canada, Ltd.

Learning Outcomes
- List the criteria necessary for inferring cause-and-effect relationships.
- Distinguish between experimental and quasiexperimental designs.
- Define the internal validity problems associated with experimental and quasiexperimental designs.

Learning Outcomes (cont'd)
- Describe the use of experimental and quasiexperimental designs for evaluation research.
- Critically evaluate the findings of selected studies that test cause-and-effect relationships.
- Apply levels of evidence to experimental and quasiexperimental designs.

What Is an Experiment?
- An experiment is a mode of observation that enables researchers to probe causal relationships.
- Many experiments in social research are conducted under the controlled conditions of a laboratory, but experimenters can also take advantage of natural occurrences to study the effects of events in the social world.

Purpose of Research Designs
- To provide the plan for testing hypotheses about the independent and dependent variables.
- Experimental and quasiexperimental designs differ from nonexperimental designs in that the researcher actively seeks to bring about the desired effect rather than passively observing behaviours and actions.

Key Elements in Experimental Designs
A. Dependent variable: the effect in a cause-and-effect relationship.
B. Independent variable: the variable the researcher manipulates to determine whether and how it will change the dependent variable; the cause in a cause-and-effect relationship.
An experiment examines the effect of an independent variable on a dependent variable.

Cause-and-Effect Criteria
1. The cause and effect variables must be associated (correlated; see the sketch below).
2. The cause must precede the effect in time.
3. The relationship must not be explained by a third (spurious) variable.

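As an illustration of criterion 1, the minimal sketch below checks whether two variables are associated by computing a Pearson correlation. The variable names and data are invented, not from the chapter; association alone satisfies only the first criterion.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical data: hours of an intervention (presumed cause) and
# patient anxiety scores (presumed effect) for eight subjects.
intervention_hours = [0, 1, 1, 2, 3, 3, 4, 5]
anxiety_scores = [9, 8, 8, 6, 5, 5, 4, 2]

# Criterion 1: the cause and effect variables must be associated.
r = correlation(intervention_hours, anxiety_scores)
print(f"Pearson r = {r:.2f}")  # strongly negative here

# Criteria 2 and 3 cannot be verified from the numbers alone: the
# design itself must ensure the cause precedes the effect and rule
# out spurious third variables (e.g., through randomization).
```
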
Research Design
Use a design that:
- Is appropriate to the research question
- Maximizes control
- Holds the conditions of the study constant
- Establishes specific sampling criteria
- Maximizes the level of evidence

Maximizing Control
Rule out extraneous variables through:
- Homogeneous sampling
- Constancy in data collection
- Manipulation of the independent variable
- Randomization

Experimental Design Features
- Randomization of subjects to the control or treatment group (see the sketch below)
- Control of extraneous influences on the independent and dependent variables
- Manipulation of the independent variable

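A minimal sketch of the randomization feature listed above, assuming a hypothetical pool of twenty subject IDs; simple shuffling stands in for whatever allocation procedure a real study would use.

```python
import random

# Hypothetical pool of twenty enrolled subjects.
subjects = [f"S{i:02d}" for i in range(1, 21)]

# Randomization: shuffle so every subject has an equal chance of
# landing in either group, spreading extraneous variables evenly.
random.seed(42)  # fixed seed only to make the example reproducible
random.shuffle(subjects)

treatment_group = subjects[:10]  # receive the manipulated variable
control_group = subjects[10:]    # receive no treatment

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```
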
Experimental Design: Advantages and Disadvantages
Advantages:
- Most appropriate for testing cause-and-effect relationships
- Provides the highest level of evidence for single studies
Disadvantages:
- Not all research questions are amenable to experimental manipulation or randomization
- Subject mortality, especially among control-group subjects
- Difficult logistics in field settings
- Hawthorne effect (next slide)

Hawthorne Effect
- Refers to any variability in the dependent variable that is not the direct result of variations in the treatment variable.
- Hypothesis: worker productivity would increase as lighting intensity was increased.
- When lighting increased, productivity increased; however, when lighting was later decreased, productivity did not decrease. Why?
- Interpretation: something other than the treatment variable influenced the workers; perhaps they worked faster because they knew they were being observed.

Quasiexperimental Design Types: Level III Evidence (p. 220)
- Nonequivalent control group design
- After-only nonequivalent control group design
- One-group (pretest–post-test) design
- Time series design
Examples follow.

The Classical Experiment (Nonequivalent Control Group Design) (p. 220 A)
Experimental group: measure dependent variable → administer stimulus → remeasure dependent variable
Control group: measure dependent variable → (no stimulus) → remeasure dependent variable
(See the sketch below.)

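A minimal sketch, with invented scores, of how the pretest/post-test measurements from this design might be compared: the change in the experimental group is judged against the change in the control group.

```python
from statistics import mean

# Invented pretest and post-test scores on the dependent variable.
experimental = {"pre": [12, 14, 11, 13], "post": [18, 19, 17, 20]}
control = {"pre": [12, 13, 12, 14], "post": [13, 13, 12, 15]}

# Change within each group from pretest to post-test.
exp_change = mean(experimental["post"]) - mean(experimental["pre"])
ctl_change = mean(control["post"]) - mean(control["pre"])

# The estimated treatment effect is the extra change seen in the
# experimental group beyond the change the control group shows.
print(f"Experimental change: {exp_change:+.2f}")
print(f"Control change:      {ctl_change:+.2f}")
print(f"Estimated effect:    {exp_change - ctl_change:+.2f}")
```
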
Exposed/Comparison Group (After-Only Nonequivalent Control Group Design) (p. 220 B)
- Measures are taken at only one point in time.
- Problem: the groups may not have been similar initially, so the result may or may not be due to the treatment variable (see the sketch below).
Schematic:
Experimental group: treatment → post-test
Control group: (no treatment) → post-test

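A minimal sketch of the only comparison this design allows, again with invented scores; because there is no pretest, a baseline difference between the groups is indistinguishable from a treatment effect.

```python
from statistics import mean

# Invented post-test scores; this design collects no pretest.
exposed_posttest = [18, 19, 17, 20]
comparison_posttest = [13, 13, 12, 15]

diff = mean(exposed_posttest) - mean(comparison_posttest)
print(f"Post-test difference: {diff:+.2f}")

# Caveat: with no pretest, we cannot tell whether this gap reflects
# the treatment or a pre-existing difference between the groups.
```
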
One-Group Pretest/Post-Test Design (p. 220 C)
Experimental group: pretest → treatment → post-test

Time Series Design (Within-Subject Design)
- Subjects are exposed to the various treatments.
- Subjects' own scores when exposed to different treatments are compared.
- It is important to have a baseline measure and to return to the original condition.
The within-subject ABBA design (see the sketch below):
- A: measure the dependent variable under the original condition
- B: measure the dependent variable under the treatment condition
- B: continue the treatment condition and measure the dependent variable
- A: measure the dependent variable after returning to the original condition
Schematic:
Experimental group: pretest → pretest → treatment → post-test → post-test

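A minimal sketch of the ABBA schedule described above; the phase order follows the slide, while the scores are invented.

```python
# ABBA schedule for one subject: baseline (A), treatment (B),
# continued treatment (B), return to the original condition (A).
phases = ["A", "B", "B", "A"]
observed = [7, 12, 13, 6]  # invented dependent-variable scores

for phase, score in zip(phases, observed):
    print(f"phase {phase}: dependent variable = {score}")

# Comparing the subject's own A scores with their B scores shows
# whether the dependent variable rises under treatment and falls
# back toward baseline once the original condition is restored.
a_scores = [s for p, s in zip(phases, observed) if p == "A"]
b_scores = [s for p, s in zip(phases, observed) if p == "B"]
print("A mean:", sum(a_scores) / len(a_scores))
print("B mean:", sum(b_scores) / len(b_scores))
```
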
Quasiexperimental Design: Advantages and Disadvantages
Advantages:
- Practical and more feasible, especially in clinical settings
- Some generalizability
Disadvantages:
- Difficult to make clear cause-and-effect statements
- May not be able to randomize

Evaluation Research
- Uses both experimental and quasiexperimental designs
- Seeks to determine the outcome of a program
- Can be formative or summative

Examples
- What is the effect of coping skills training for youths with diabetes on intensive insulin therapy? (Grey et al., 1999)
- What is the differential effect of phase-specific standardized education and telephone counselling on the physical, emotional, and social adjustment of women with breast cancer and their partners? (Hoskins et al., 2001)
- What is the effect of an Early Intervention Program (EIP) for adolescent mothers on infant health and maternal outcomes? (Koniak-Griffin et al., 2003)

Critical Thinking Decision Path: Experimental and Quasiexperimental Design

General Critiquing Criteria
- What design is used? Is the design experimental or quasiexperimental?
- Is the problem one of a cause-and-effect relationship?
- Is the method used appropriate for the problem?
- Is the design suited to the study setting?

Experimental Critiquing Criteria
- What experimental design is used? Is it appropriate?
- How are randomization, control, and manipulation applied?
- Are there reasons to believe that alternative explanations exist for the findings?
- Are all threats to validity, including mortality, addressed in the report?

Quasiexperimental Critiquing Criteria
- What quasiexperimental design is used? Is it appropriate?
- What are the most common threats to the validity of the findings?
- What are the plausible alternative explanations for the findings? Are they addressed?
- Does the author address threats to validity acceptably?
- Are limitations addressed?

Evaluation Research Critiquing Criteria
- Is the specific problem, practice, policy, or treatment being evaluated identified?
- Are the outcomes to be evaluated identified?
- Is the problem analyzed and described?
- Is the program involved described and standardized?
- Are the measurements of change identified?
- Are the observed outcomes related to the activity or to other causes?