Experimental Design.

More threats to internal validity… Instrumentation (the measurement procedure). If your way of measuring something changes over time, it can alter how your outcome is measured. E.g., if how I ask the question "did you have sex?" varies, responses may vary over time. E.g., if I am grading papers and my view of what counts as a "good answer" shifts as I read, later responses may be graded differently than earlier ones.

Instrumentation? Avoid this by piloting your measure first and then sticking with it. If you must change your measure, collect new data. Keep track of changes in your questions.

Testing. When testing or surveying people at baseline influences their responses. E.g., giving someone a survey about sex could also operate as a way of teaching them about sex; giving someone a math test could also give them a chance to practice their math skills.

Testing. Avoid testing effects by having a control group, so that if you have a testing effect you can at least control for it, and by considering carefully the ways a survey or test may be a learning experience for respondents.

Regression to the mean. Extreme scores will be less extreme when tested again: very low scores will be less low, and very high scores will be less high. Why does this matter? If you have more extreme scores in one condition, those scores will be less extreme at the second measurement, which can make it look like your intervention works (or doesn't work) when nothing real has changed.
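
Regression to the mean is easy to see in simulation. In the sketch below (illustrative only; the ability scale, noise level, and cutoff of 125 are hypothetical choices, not from the lecture), the top scorers from a first noisy test are retested with no intervention at all, and their average still drops:

```python
import random

random.seed(0)

def noisy_score(true_ability):
    """Observed score = stable ability + random luck on the day."""
    return true_ability + random.gauss(0, 10)

# Simulate 10,000 people whose true ability is N(100, 10).
abilities = [random.gauss(100, 10) for _ in range(10_000)]
first = [(a, noisy_score(a)) for a in abilities]

# Select only the extreme scorers on the first test...
extreme = [(a, s) for a, s in first if s > 125]

# ...and retest them with NO intervention of any kind.
first_mean = sum(s for _, s in extreme) / len(extreme)
retest_mean = sum(noisy_score(a) for a, _ in extreme) / len(extreme)

print(f"first-test mean of extreme group: {first_mean:.1f}")
print(f"retest mean (no treatment):       {retest_mean:.1f}")
```

The retest mean falls simply because the extreme group was selected partly for its good luck, and luck does not repeat.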

Avoiding regression to the mean. If you have extreme scores, use stratification or block randomization to make sure the groups are equally balanced on those scores. Remember that simple randomization won't always fix this, especially with a small sample.
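
One way to balance extreme scorers is stratified randomization: split the sample into strata by score, then randomize within each stratum. A minimal sketch, where the cutoff of 120, the two strata, and the scores are all hypothetical:

```python
import random

random.seed(1)

def stratified_assignment(scores, cutoff=120):
    """Split participants into 'extreme' and 'typical' strata by score,
    then randomize to treatment/control within each stratum, so both
    arms get the same share of extreme scorers."""
    arms = {"treatment": [], "control": []}
    for is_extreme in (True, False):
        members = [i for i, s in enumerate(scores) if (s > cutoff) == is_extreme]
        random.shuffle(members)
        half = len(members) // 2
        arms["treatment"].extend(members[:half])
        arms["control"].extend(members[half:])
    return arms

# 4 extreme scorers and 8 typical scorers (made-up numbers).
scores = [130, 142, 125, 137, 101, 98, 110, 95, 103, 99, 107, 102]
arms = stratified_assignment(scores)
for arm, members in arms.items():
    n_extreme = sum(scores[i] > 120 for i in members)
    print(f"{arm}: {len(members)} participants, {n_extreme} extreme scorers")
```

With simple (unstratified) randomization and a sample this small, one arm could easily end up with three or four of the extreme scorers by chance.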

Placebo and demand characteristics. When participants think they are getting better, they feel better! We control for this using blinding: participants are blind to the group they are in. But what about the researcher? Researchers may influence results by communicating their expectations, so where possible we use double-blind designs.

Double blind. Both the participant and the researcher are unaware of which treatment the participant receives. This can be very difficult to achieve in psychology. Why?

Confounding. A confound is a third variable that accounts for the apparent influence of your IV on your DV. Many types of threats to internal validity end up functioning as confounds: placebo, history, selection, etc. The point is that a confound is an unmeasured influence that is actually responsible for the effect.
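
A confound can be simulated directly. In the hypothetical sketch below, an unmeasured variable (motivation) drives both who signs up for a tutoring program and the outcome score; tutoring itself has zero effect, yet the tutored group scores clearly higher:

```python
import random

random.seed(2)

# "motivation" is an unmeasured confound: it raises the chance of
# signing up for tutoring AND raises the score. Tutoring itself
# contributes nothing to the score in this simulation.
tutored, untutored = [], []
for _ in range(10_000):
    motivation = random.gauss(0, 1)
    signs_up = motivation + random.gauss(0, 1) > 0        # self-selection
    score = 70 + 5 * motivation + random.gauss(0, 5)      # no tutoring term!
    (tutored if signs_up else untutored).append(score)

apparent_effect = (sum(tutored) / len(tutored)
                   - sum(untutored) / len(untutored))
print(f"apparent 'tutoring effect': {apparent_effect:.1f} points")
```

Random assignment breaks exactly this link: if sign-up were decided by a coin flip instead of motivation, the apparent effect would shrink to zero.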

External validity? So for whom and under what circumstances is this treatment actually effective?

External validity. To whom and under what conditions can results be generalized? This is a question of great practical and theoretical significance: if your intervention only works under very specific conditions, is it really useful?

External validity: an example. A university clinic uses an intervention to treat depressed patients:
--only patients diagnosed with depression alone are included
--the therapists are graduate students who see only a few patients each week
--each patient gets a 3-hour battery of tests plus an in-depth diagnostic interview
--the graduate students get weekly supervision to make sure they are maintaining the treatment approach
--treatment is free
--it takes place in a quiet clinic on an attractive university campus

Selection bias. Here the whole sample is biased: not in a way that makes the intervention group different from the control group, but in a way that makes them all different from the likely target population. E.g., most people with depression don't ONLY have depression.

Testing. Testing won't make the intervention and control groups different, but it may make their experiences different from those of people who get the treatment later. E.g., a 3-hour battery of tests and interviews may itself be therapeutic under some circumstances.

Reactive effects of experimental arrangements. Most therapists don't see only a few patients a week. Most therapists don't get weekly supervision to make sure they are maintaining the protocol. Most patients don't get therapy in lovely quiet offices on pretty university campuses.

How to avoid problems with external validity? It is difficult: the higher your internal validity (the more you control all the factors that could muddy or influence your outcome), the lower your external validity will be.

Building external validity takes time. You may have to "redo" the intervention several times, changing, varying, and measuring the circumstances: start neat and tidy, then slowly add in and measure real-world messiness. This can be costly and time-consuming.

"Pre-experimental designs." Also called pilot studies. Generally low in internal validity, but a good place to start: cheaper and quicker. You want to know you have something before you go to the trouble and expense of a full-blown randomized experiment.

Quasi-experimental designs. These use experimental and control groups but do not use random assignment. Why? Often because random assignment is impractical or unethical. They may use "matching" instead: matching participants on qualities of interest.

Pilot studies: pretest-posttest. No control group; you just measure whether your intervention scores change from baseline to posttest. E.g., I treat my depressed patients with my intervention and just measure their improvement. Why would I do this?

Pilot studies: posttest only. OK for measuring an outcome. E.g., the SAT could be considered a posttest-only design: it gives your achievement scores, but it gives no sense at all of what your achievement might have been before, how it changed, or what caused the change.

Pilot studies: static group design. Here I simply examine the outcomes of two different treatments: I don't control selection, and I don't measure baseline. Again, this can be a useful first step.

Equivalent time samples design. More succinctly known as a single-subject design; it may have only one participant. The design is: baseline (no treatment), treatment, no treatment, treatment.

Single-subject designs are quite common in behavioral research. E.g., treatment for OCD: at baseline, measure the picking behavior; during treatment, withhold "reinforcement" and the behavior goes away; withdraw the treatment (return to reinforcement) and the behavior returns; treat again (withhold reinforcement) and the behavior goes away again.

Single-subject designs are actually a very powerful experimental technique, commonly used in behavior analysis and treatment, and a good way to establish causality.