Week 2 Outline Me Barbara Maddie

Week 2 Outline
Me: Mini-lecture on cause & effect and implications for research design (Shadish, Cook & Campbell, 2002)
Barbara: P&R Chapter 5
Maddie: Publication bias (TEXT Chapter 30); Confounding and MA (Valentine & Thompson, 2013); Another tool for quality assessment (Zingg et al., 2016); Gugiu & Gugiu, 2010; Higgins et al., 2011; Threats to validity; More from Gugiu (Gugiu et al., 2012; Gugiu, 2015)

From H615 Fall 2013, Advanced Research Design: Appraisal of the quality of research designs requires an understanding of cause and effect and of their role in research design. Topics: cause and effect; confounding, mediation, and moderation; basics of research design; review and further thoughts on experimentation.

CAUSE. Locke: "That which makes an effect begin to be." Most effects have multiple causes. INUS condition: "an insufficient but non-redundant part of an unnecessary but sufficient condition." Probabilistic cause: most causes are not unique, but only increase the probability of a specific effect. Most causes are context dependent; they differ across contexts.

EFFECT. Locke: "That which had its beginning from some other thing" (the cause). An effect is the difference between what happened as the result of a cause and what would have happened without that cause. Counterfactual: what would have happened without the treatment; it cannot be observed.

Counterfactual Reasoning. Counterfactual reasoning is fundamentally qualitative, although Rubin, Holland, and others have developed statistical approaches. The central task for all cause-probing research (experimentation) is to create approximations to the counterfactual, which is impossible to actually observe. Two central tasks: create a high-quality, though imperfect, source of counterfactual inference; and understand how this source differs from the treatment condition.
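A minimal simulation (hypothetical numbers, not from the slides) of the point above: each unit has two potential outcomes, but only the one matching its assignment is ever observed, so a comparison group must stand in for the unobservable counterfactual.

```python
# Sketch: both potential outcomes exist in the simulation, but only one is "observed"
# per unit; the control-group mean approximates the treated group's missing counterfactual.
import numpy as np

rng = np.random.default_rng(8)
n = 1_000
y_without = rng.normal(10, 3, size=n)        # outcome each unit would have without treatment
y_with = y_without + 2.0                     # outcome with treatment (true effect = 2)

assigned = rng.integers(0, 2, size=n)        # random assignment: 1 = treated, 0 = control
observed = np.where(assigned == 1, y_with, y_without)   # only one outcome per unit is seen

print(observed[assigned == 1].mean() - observed[assigned == 0].mean())   # ~2.0
```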

Conditions for Inferring Cause (due to John Stuart Mill). Temporal: the cause preceded the effect. Relationship: the cause was related to the effect. Plausibility: there is no other plausible alternative explanation for the effect other than the cause. In experiments, we manipulate the cause and observe the subsequent effect, see whether the cause and effect are related, and use various methods during and after the experiment to rule out plausible alternative explanations.

Causation, Correlation, Confounds. Correlation does not prove causation, largely because of confounding variables: third variables (in addition to the presumed cause and effect) that could be the real cause of both the presumed cause and the effect.
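A small simulation makes this concrete (hypothetical variable names; a sketch, not from the slides): a confounder Z drives both the presumed cause X and the effect Y, producing a clear correlation between X and Y even though X has no causal effect on Y at all.

```python
# Sketch of confounding: Z causes both X and Y; X does not cause Y, yet X and Y correlate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)              # confounder (the "third variable")
x = 0.8 * z + rng.normal(size=n)    # presumed cause, driven only by Z
y = 0.8 * z + rng.normal(size=n)    # effect, driven only by Z (no effect of X)

print(np.corrcoef(x, y)[0, 1])      # clearly nonzero despite no X -> Y effect
```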

Manipulation & Causal Description. Experiments explore the effects of things that can be manipulated; they identify the effects (DV) of a manipulated cause (IV). This is causal description, not explanation: knowledge of the effects tells us nothing about the mechanisms or causal processes, only that they occur. Descriptive causation is usually between molar causes and molar outcomes (molar = a package that contains many components).

Mediation and Moderation. Many causal explanations consist of causal chains: mediator variables and mediation. Some experiments vary the conditions under which a treatment is provided: moderator variables and moderation. Some interventions work better for some subgroups (e.g., age, race, SES) than for others; this, too, is moderation.
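One common way to operationalize moderation (an illustrative sketch with simulated data and a hypothetical moderator, not the slides' method) is a treatment-by-moderator interaction term in a regression:

```python
# Sketch: moderation as an interaction term; the interaction coefficient captures how
# the treatment effect changes with the moderator (here, a hypothetical "age" variable).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, size=n)                  # 0 = control, 1 = treatment
age = rng.normal(40, 10, size=n)                    # hypothetical moderator
y = 2.0 * treat + 0.05 * age + 0.10 * treat * (age - 40) + rng.normal(size=n)

X = sm.add_constant(np.column_stack([treat, age, treat * age]))
model = sm.OLS(y, X).fit()
print(model.params)                                 # last coefficient ~ 0.10 (moderation)
```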

What is Experimental Design? The creation of a counterfactual: creation or identification of a comparison/control group or condition, and a method for assigning units to treatment and counterfactual conditions. It includes both strategies for organizing data collection and data analysis procedures matched to those data collection strategies. Classical treatments of design stress analysis procedures based on the analysis of variance (ANOVA); other procedures, such as those based on hierarchical linear models or on analysis of aggregates (e.g., class or school means), are also appropriate.
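For concreteness, a minimal version of the ANOVA-style analysis mentioned above (simulated outcomes for three assigned conditions; a sketch only):

```python
# Sketch: one-way ANOVA comparing outcome means across three conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control = rng.normal(50, 10, size=60)
treat_a = rng.normal(55, 10, size=60)
treat_b = rng.normal(53, 10, size=60)

f_stat, p_value = stats.f_oneway(control, treat_a, treat_b)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```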

Why Do We Need Experimental Design? Because of variability. We would not need a science of experimental design if all units (students, teachers, and schools) were identical and all units responded identically to treatments. We need experimental design to control variability so that treatment effects can be identified.

Principles of Experimental Design. Experimental design controls background variability so that systematic effects of treatments can be observed. Three basic principles: control by matching, control by randomization, and control by statistical adjustment. Their importance is in that order. Why? What are some examples of matching? What is the best example of matching? Twin studies.

1. Control by Matching. Known sources of variation may be eliminated by matching: eliminating genetic variation (compare animals from the same litter of mice; twin studies), or eliminating district or school effects (compare students within districts or schools). However, matching is limited: it is only possible on observable characteristics, perfect matching is not always possible, and matching inherently limits generalizability by removing (possibly desired) variation.
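A matched-pairs analysis in miniature (simulated twin data with hypothetical numbers; a sketch, not from the slides): comparing outcomes within pairs lets the shared component drop out of the treatment contrast.

```python
# Sketch: within-pair (twin) comparison; the shared pair effect cancels in the paired test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_pairs = 40
pair_effect = rng.normal(0, 5, size=n_pairs)                 # shared within each twin pair
treated = 2.0 + pair_effect + rng.normal(size=n_pairs)       # twin receiving treatment
untreated = 0.0 + pair_effect + rng.normal(size=n_pairs)     # co-twin control

t_stat, p_value = stats.ttest_rel(treated, untreated)        # paired comparison
print(f"mean within-pair difference = {np.mean(treated - untreated):.2f}, p = {p_value:.4f}")
```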

Control by Matching (continued). Matching ensures that the groups compared are alike on specific known and observable characteristics (in principle, everything we have thought of). Wouldn't it be great if there were a method of making groups alike not only on everything we have thought of, but on everything we didn't think of too? There is such a method.

2. Control by Randomization. Matching controls for the effects of variation due to specific observable characteristics; randomization controls for the effects of all characteristics, observable or non-observable, known or unknown. Randomization makes groups equivalent (on average) on all variables. Randomization also gives us a way to assess whether differences after treatment are larger than would be expected due to chance.
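The mechanics are simple; a minimal sketch of simple random assignment (hypothetical unit IDs, not from the slides) is below. Because the shuffle ignores every characteristic of the units, assignment is unrelated to anything, measured or unmeasured.

```python
# Sketch: simple random assignment by shuffling unit IDs and splitting them in half.
import numpy as np

rng = np.random.default_rng(4)
units = np.arange(100)                 # e.g., 100 students
rng.shuffle(units)                     # random order, unrelated to any characteristic
treatment_group = units[:50]
control_group = units[50:]
print(len(treatment_group), len(control_group))
```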

The Randomized Experiment, the "Gold Standard." Random assignment creates two or more groups of units (people, clinics, hospitals, classrooms, schools, places) that are probabilistically similar to each other on average, provided it is done properly and the Ns are large enough. A randomized experiment yields an estimate of the size of a treatment effect (the "population average causal effect") that has desirable statistical properties, and an estimate of the probability that the true effect falls within a defined confidence interval.
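The basic analysis this implies, sketched with simulated outcomes (assumptions: normal approximation, equal-sized arms; not taken from the slides), is a difference in means with a 95% confidence interval:

```python
# Sketch: difference-in-means estimate of the average causal effect with a 95% CI.
import numpy as np

rng = np.random.default_rng(5)
y_treat = rng.normal(12.0, 4.0, size=200)     # outcomes in the treatment arm
y_control = rng.normal(10.0, 4.0, size=200)   # outcomes in the control arm

effect = y_treat.mean() - y_control.mean()
se = np.sqrt(y_treat.var(ddof=1) / y_treat.size + y_control.var(ddof=1) / y_control.size)
ci = (effect - 1.96 * se, effect + 1.96 * se)
print(f"estimated effect = {effect:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```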

Quasi-Experiments. The cause is manipulated and occurs before the effect. However, there is less compelling support for counterfactual inferences; that is, there are more plausible alternative explanations. Quasi-experiments take a "falsification" approach: researchers must identify the plausible alternative explanations that could falsify a causal claim and then rule them out.

Different kinds of quasi-experiments: ECTs and NECTs (Gugiu uses this terminology). Equivalent controlled trials (ECTs), e.g., matched groups. Non-equivalent controlled trials (NECTs), e.g., self-selected groups. Uncontrolled trials (UTs): no control or comparison group of any kind.

Fallible Falsification. Disconfirmation is not always complete; it may lead to modification of aspects of a theory. Falsification requires measures that are perfectly valid reflections of the theory being tested, which is difficult to achieve. Observations are more fact-like when they are repeated across multiple conceptions of the construct, across multiple measures, and at multiple times. Plausible alternatives depend on social consensus, shared experience, and empirical data.

Generalization. This is the major limitation of RCTs: most experiments are localized and particularistic. Causal generalization (Cronbach) would require random sampling of units (people or places), treatments (Tx), observations of the units with and without Tx, and the settings in which the treatments are provided and the observations made. This is never achieved. Note the difference between random sampling of units (people) and random assignment to condition.

3. Control by Statistical Adjustment. Control by statistical adjustment is a form of pseudo-matching: it uses statistical relations to simulate matching. Statistical control is important for increasing precision, but it should not be relied upon to control biases that may exist prior to assignment.
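One common form of such adjustment, sketched with simulated data and a hypothetical pretest covariate (an illustration, not the slides' method), is covariate-adjusted regression:

```python
# Sketch: regression adjustment; the treatment coefficient is estimated while
# "holding constant" a pre-assignment covariate (here a hypothetical pretest score).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
pretest = rng.normal(50, 10, size=n)                  # pre-assignment covariate
treat = rng.integers(0, 2, size=n)
posttest = 3.0 * treat + 0.7 * pretest + rng.normal(0, 5, size=n)

X = sm.add_constant(np.column_stack([treat, pretest]))
fit = sm.OLS(posttest, X).fit()
print(fit.params)    # adjusted treatment effect is the coefficient on `treat`
```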

Rubin’s Causal Model (RCM) The causal effect is the difference between what would have happened to the participant in the treatment condition and what would have happened to the same participant if he or she had instead been in the control condition The counterfactual account of causality? No - Rubin does not characterize RCM as a counterfactual account. He prefers the “potential outcomes” conceptualization – a formal statistical model rather than a philosophy of causation Of course, this is impossible to observe, so all manipulations and statistical adjustments are designed to approximate it Unobserved values are assumed to be missing (completely at random in RCTs)

Campbell and Rubin both favor: causal description rather than explanation; discovering the effects of known causes rather than the unknown causes of known effects (as in epidemiology); a central role for manipulable causes; the use of strong designs, especially RCTs; and matching (RCM added propensity scores); and they both apply extra effort to causal inference in non-randomized experiments and observational studies.
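A sketch of the propensity-score idea mentioned above (simulated data, hypothetical covariates; illustrative only): model the probability of treatment from observed covariates, then compare units with similar propensities.

```python
# Sketch: fit a propensity model, then check covariate balance within a narrow
# propensity band (a crude stand-in for matching on the propensity score).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 1_000
covariates = rng.normal(size=(n, 3))                         # observed pre-treatment covariates
p_treat = 1 / (1 + np.exp(-covariates @ np.array([0.5, -0.3, 0.2])))
treated = rng.binomial(1, p_treat)

propensity = LogisticRegression().fit(covariates, treated).predict_proba(covariates)[:, 1]

band = (propensity > 0.45) & (propensity < 0.55)              # units with similar propensities
print(covariates[band & (treated == 1)].mean(axis=0))         # treated and untreated means
print(covariates[band & (treated == 0)].mean(axis=0))         # should look similar
```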