Week 2 Outline
- Me: Mini-lecture on cause & effect and implications for research design (Shadish, Cook & Campbell, 2002)
- Barbara: P&R Chapter 5
- Maddie: Publication bias (TEXT Chapter 30)
- Confounding and MA (Valentine & Thompson, 2013)
- Another tool for quality assessment (Zingg et al., 2016)
- Gugiu & Gugiu, 2010; Higgins et al., 2011
- Threats to validity
- More from Gugiu (Gugiu et al., 2012; Gugiu, 2015)
From H615 Fall 2013 Advanced Research Design
Appraisal of the quality of research designs requires an understanding of cause and effect and their place/role in research design:
- Cause and effect
- Confounding, mediation, moderation
- Basics of research design
- Review and further thoughts on experimentation
CAUSE
Locke: "That which makes an effect begin to be"
- Most effects have multiple causes
- INUS conditions: "an insufficient but non-redundant part of an unnecessary but sufficient condition"
- Probabilistic cause: most causes are not unique; they only increase the probability of a specific effect
- Most causes are context dependent; they differ across contexts
EFFECT
Locke: "That which had its beginning from some other thing (cause)"
- The difference between what happened as the result of a cause and what would have happened without that cause
- Counterfactual: what would have happened without the treatment; it cannot be observed
Counterfactual Reasoning
Counterfactual reasoning is fundamentally qualitative, although Rubin, Holland, and others have developed statistical approaches.
The central task for all cause-probing research (experimentation) is to create approximations to the counterfactual, which is impossible to actually observe. This involves two tasks:
- Create a high-quality, though imperfect, source of counterfactual inference
- Understand how this source differs from the treatment condition
Conditions for Inferring Cause
Due to John Stuart Mill:
- Temporal: the cause preceded the effect
- Relationship: the cause was related to the effect
- Plausibility: there is no other plausible alternative explanation for the effect other than the cause
In experiments, we:
- Manipulate the cause and observe the subsequent effect
- See whether the cause and effect are related
- Use various methods during and after the experiment to rule out plausible alternative explanations
Causation, Correlation, Confounds
Correlation does not prove causation, because of confounding variables: third variables (in addition to the presumed cause and effect) that could be the real cause of both the presumed cause and the effect.
Manipulation & Causal Description
- Experiments explore the effects of things that can be manipulated
- Experiments identify the effects (DV) of a (manipulated) cause (IV)
- Causal description, not explanation: knowledge of the effects tells us nothing about the mechanisms or causal processes, only that they occur
- Descriptive causation is usually between molar causes and molar outcomes (molar = a package that contains many components)
Mediation and Moderation
- Many causal explanations consist of causal chains: mediator variables and mediation
- Some experiments vary the conditions under which a treatment is provided: moderator variables and moderation
- Some interventions work better for some subgroups (e.g., age, race, SES) than others: moderation
What is Experimental Design?
The creation of a counterfactual:
- Creation/identification of a comparison/control group or condition
- A method for assigning units to treatment and counterfactual conditions
Includes both:
- Strategies for organizing data collection, and
- Data analysis procedures matched to those data collection strategies
Classical treatments of design stress analysis procedures based on the analysis of variance (ANOVA); other procedures, such as those based on hierarchical linear models or analysis of aggregates (e.g., class or school means), are also appropriate.
Why Do We Need Experimental Design?
Because of variability. We would not need a science of experimental design if:
- all units (students, teachers, and schools) were identical, and
- all units responded identically to treatments
We need experimental design to control variability so that treatment effects can be identified.
Principles of Experimental Design
Experimental design controls background variability so that systematic effects of treatments can be observed. Three basic principles:
1. Control by matching
2. Control by randomization
3. Control by statistical adjustment
Their importance is in that order. Why? What are some examples of matching? What is the best example of matching? Twin studies.
1. Control by Matching
Known sources of variation may be eliminated by matching:
- Eliminating genetic variation: compare animals from the same litter of mice; twin studies
- Eliminating district or school effects: compare students within districts or schools
However, matching is limited:
- matching is only possible on observable characteristics
- perfect matching is not always possible
- matching inherently limits generalizability by removing variation (possibly desired variation)
Control by Matching
Matching ensures that the groups compared are alike on specific known and observable characteristics (in principle, everything we have thought of). Wouldn't it be great if there were a method of making groups alike not only on everything we have thought of, but on everything we didn't think of too? There is such a method.
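The within-litter and twin comparisons above can be sketched as nearest-neighbor matching on a single observable covariate. This is an illustrative sketch with invented numbers, not code from the lecture: each treated unit is paired with the most similar control unit, and the effect is estimated from the matched pairs only.

```python
def match_and_estimate(treated, controls):
    """Nearest-neighbor matching on one observable covariate.

    treated/controls: lists of (covariate, outcome) tuples.
    Returns the average treated-minus-matched-control outcome difference.
    """
    diffs = []
    for x_t, y_t in treated:
        # pair with the control unit closest on the observed covariate
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        diffs.append(y_t - y_c)
    return sum(diffs) / len(diffs)

# hypothetical data: (covariate, outcome) pairs
treated = [(1.0, 5.0), (2.0, 7.0), (3.0, 9.0)]
controls = [(0.9, 3.0), (2.1, 5.0), (3.2, 7.0), (5.0, 12.0)]
print(match_and_estimate(treated, controls))  # → 2.0
```

Note that the far-away control unit (covariate 5.0) is never used: matching discards units outside the region of overlap, which is exactly the loss of generalizability the slide mentions.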
2. Control by Randomization
- Matching controls for the effects of variation due to specific observable characteristics
- Randomization controls for the effects of all characteristics: observable or non-observable, known or unknown
- Randomization makes groups equivalent (on average) on all variables, known and unknown, observable or not
- Randomization also gives us a way to assess whether differences after treatment are larger than would be expected due to chance
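The claim that randomization balances even unmeasured characteristics can be checked with a small simulation (all numbers invented for illustration): units carry a hidden covariate the researcher never records, yet random assignment makes the two groups' means of that covariate nearly equal.

```python
import random

random.seed(0)

# a hidden characteristic (e.g., unmeasured ability), SD = 15
units = [random.gauss(100, 15) for _ in range(10_000)]

# random assignment: shuffle, then alternate into two groups
random.shuffle(units)
group_a, group_b = units[::2], units[1::2]

mean_a = sum(group_a) / len(group_a)
mean_b = sum(group_b) / len(group_b)
# the gap is tiny relative to the covariate's SD of 15,
# even though no one ever measured or matched on it
print(abs(mean_a - mean_b))
</test>```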
The Randomized Experiment – The “Gold Standard”
Random assignment creates two or more groups of units (people, clinics, hospitals, classrooms, schools, places) that are probabilistically similar to each other on average, if done properly and when Ns are large enough.
A randomized experiment yields:
- An estimate of the size of a treatment effect (the "population average causal effect") that has desirable statistical properties
- A confidence interval quantifying the chance uncertainty around that estimate
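A minimal sketch of what a randomized experiment yields, using simulated data (the true effect of 2.0 and all other numbers are made up): the difference in group means estimates the average causal effect, and a normal-approximation 95% interval expresses chance variation.

```python
import random
import statistics

random.seed(1)
true_effect = 2.0  # known only because we simulated it

control = [random.gauss(10, 3) for _ in range(500)]
treatment = [random.gauss(10 + true_effect, 3) for _ in range(500)]

# difference in means: the estimate of the average causal effect
est = statistics.fmean(treatment) - statistics.fmean(control)

# standard error of the difference, then a 95% CI (normal approximation)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5
ci = (est - 1.96 * se, est + 1.96 * se)
print(est, ci)  # estimate lands near 2.0; the CI conveys its precision
```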
Quasi-Experiments
- The cause is manipulated
- The cause occurs before the effect
- However, there is less compelling support for counterfactual inferences; that is, there are more plausible alternative explanations
- Take a "falsification" approach: researchers must identify and rule out plausible alternative explanations that could falsify the causal claim
Different Kinds of QEs: ECTs and NECTs
Gugiu uses this terminology:
- Equivalent controlled trials (ECTs), e.g., matched groups
- Non-equivalent controlled trials (NECTs), e.g., self-selected groups
- Uncontrolled trials (UTs): no control or comparison group of any kind
Fallible falsification
- Disconfirmation is not always complete; it may lead to modification of aspects of a theory
- Falsification requires measures that are perfectly valid reflections of the theory being tested, which is difficult to achieve
- Observations are more fact-like when they are repeated across multiple conceptions of the construct, across multiple measures, and at multiple times
- Judgments of plausible alternatives depend on social consensus, shared experience, and empirical data
Generalization: The Major Limitation of RCTs
- Most experiments are localized and particularistic
- Causal generalization (Cronbach) would require random sampling of: units (people or places); treatments (Tx); observations of the units with and without Tx; and settings in which the treatments are provided and observations made
- This is never achieved
- Note the difference between random sampling of units (people) and random assignment to condition
3. Control by Statistical Adjustment
- Control by statistical adjustment is a form of pseudo-matching: it uses statistical relations to simulate matching
- Statistical control is important for increasing precision, but it should not be relied upon to control biases that may exist prior to assignment
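A sketch of this "pseudo-matching" using simulated data (the effect of 1.5, the covariate slope of 2.0, and all sample sizes are invented): regressing out a pre-treatment covariate removes the variance it explains, sharpening the treatment-effect estimate. The Frisch-Waugh residualization below is one standard way to do the adjustment with only the standard library.

```python
import random
import statistics

def resid(dep, x):
    """Residuals of dep after a simple regression on x (with intercept)."""
    mx = statistics.fmean(x)
    md = statistics.fmean(dep)
    b = (sum((xi - mx) * (di - md) for xi, di in zip(x, dep))
         / sum((xi - mx) ** 2 for xi in x))
    return [di - md - b * (xi - mx) for xi, di in zip(x, dep)]

random.seed(2)
n, effect = 400, 1.5
t = [i % 2 for i in range(n)]                      # randomized treatment
x = [random.gauss(0, 1) for _ in range(n)]         # pre-treatment covariate
y = [effect * t[i] + 2.0 * x[i] + random.gauss(0, 1) for i in range(n)]

# adjusted effect: regress the x-residuals of y on the x-residuals of t
ry, rt = resid(y, x), resid(t, x)
adj = sum(a * c for a, c in zip(ry, rt)) / sum(c * c for c in rt)
print(adj)  # close to the true effect of 1.5
```

Because assignment here was randomized, the adjustment only improves precision. If x itself differed systematically between groups before assignment, no amount of adjustment on measured variables would guarantee an unbiased estimate, which is the slide's caution.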
Rubin’s Causal Model (RCM)
- The causal effect is the difference between what would have happened to the participant in the treatment condition and what would have happened to the same participant if he or she had instead been in the control condition
- Is this a counterfactual account of causality? No: Rubin does not characterize RCM as a counterfactual account. He prefers the "potential outcomes" conceptualization, a formal statistical model rather than a philosophy of causation
- Of course, this difference is impossible to observe, so all manipulations and statistical adjustments are designed to approximate it
- Unobserved potential outcomes are treated as missing data (missing completely at random in RCTs)
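The potential-outcomes idea can be made concrete with a simulation (all values invented): each unit has two potential outcomes, y1 under treatment and y0 under control, but only one is ever observed; the other is the "missing" value. Random assignment lets the observed group difference estimate the unobservable average causal effect.

```python
import random
import statistics

random.seed(3)
n = 2000

# both potential outcomes exist for every unit, but only in the simulation
y0 = [random.gauss(50, 10) for _ in range(n)]      # outcome if untreated
y1 = [v + 5.0 for v in y0]                         # true unit effect = 5
true_ace = statistics.fmean(y1) - statistics.fmean(y0)  # exactly 5.0

# random assignment reveals one potential outcome per unit; the other
# becomes missing (completely at random, as in an RCT)
assign = [random.random() < 0.5 for _ in range(n)]
observed_t = [y1[i] for i in range(n) if assign[i]]      # y0 unobserved
observed_c = [y0[i] for i in range(n) if not assign[i]]  # y1 unobserved

estimate = statistics.fmean(observed_t) - statistics.fmean(observed_c)
print(true_ace, estimate)  # the estimate approximates the unobservable truth
```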
Campbell and Rubin Both Favor:
- Causal description rather than explanation
- Discovering the effects of known causes rather than the unknown causes of known effects (as in epidemiology)
- A central role for manipulable causes
- Use of strong designs, especially RCTs
- Matching (to which RCM added propensity scores)
- Applying extra effort to address causal inference in non-randomized experiments and observational studies