Experimental Design
Learning Goal 1.2: Design an effective psychological experiment that accounts for bias, reliability, and validity.
Variables
Independent variable: the condition that you change to see if it has an effect.
Dependent variable: the result you measure to look for a change.
Experimental research questions often take the form of "How does (IV) affect (DV)?"
Confounding variables: other uncontrolled factors that could distort the relationship between your IV and DV (these abound in psychology!).
Identifying Variables
You want to see if drinking soda affects dancing ability. What is your IV? Your DV? Your confounds?
You want to test whether sweet foods influence mood. What is your IV? Your DV? Your confounds?
You want to see if people perform differently on tests if they're reminded of stereotypes beforehand. What is your IV? Your DV? Your confounds?
Conditions
Experimental condition: receives the treatment (the manipulated IV).
Control condition: does not receive the treatment.
An experiment may have more than one condition of either type.
Designing Control Conditions
You want to see if drinking soda affects dancing ability. What should your experimental condition(s) be? What should your control condition(s) be?
You want to test whether sweet foods influence mood. What are your experimental condition(s)? Your control condition(s)?
You want to see if people perform differently on tests if they're reminded of stereotypes beforehand. What should your experimental and control groups do?
Three Sources of Error
Observer error
Participant error
Administrative error
Observer Error
Example: confirmation bias (the experimenter notices only what supports his or her theory).
How to prevent it:
Random assignment: randomly choose which condition each participant is assigned to (see the sketch below).
Double-blind procedure: neither the participant nor the experimenter knows which group the participant is in; may be aided by use of a placebo, an inactive treatment that looks similar to the experimental treatment.
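To make random assignment concrete, here is a minimal Python sketch (not part of the original slides); the participant IDs and group sizes are hypothetical and chosen only for illustration.

```python
# Minimal sketch of random assignment, assuming 20 hypothetical participants.
# Shuffling before splitting gives every participant an equal chance of landing
# in either condition, which spreads confounding variables across groups on average.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs

random.shuffle(participants)                   # randomize the order
midpoint = len(participants) // 2
experimental_group = participants[:midpoint]   # receives the treatment (the manipulated IV)
control_group = participants[midpoint:]        # receives the placebo / no treatment

print("Experimental:", experimental_group)
print("Control:     ", control_group)
```

In a double-blind study, the list of who is in which group would be kept by someone other than the experimenter running the sessions.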
Participant Error
Examples: demand characteristics (trying to give "good" data); social desirability bias (trying to "look good").
How to prevent it:
Random assignment
Double-blind procedure and placebos
Ensuring the participant doesn't feel watched or judged
Administrative Error
Example: variations in how the study is carried out from one session to the next.
How to prevent it:
Double-blind procedure and placebos
Strict scripting and clearly defined protocols
Designing Measures
We want our measurements of our independent and dependent variables to be:
Reliable
Valid
Test Reliability
A measurement is reliable if it gives consistent results.
Test-retest reliability: a participant who completes the task multiple times keeps getting fairly similar results.
Inter-rater reliability: two evaluators would both score the same results the same way.
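As a rough illustration (not from the original slides) of how these two kinds of reliability are often quantified, the sketch below uses a simple Pearson correlation on invented scores; real studies may use other statistics such as Cronbach's alpha or Cohen's kappa.

```python
# Minimal sketch: reliability expressed as a correlation between two sets of scores.
# All scores below are invented for illustration.
import numpy as np

# Test-retest reliability: the same participants measured on two occasions.
session_1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])
session_2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])
test_retest_r = np.corrcoef(session_1, session_2)[0, 1]

# Inter-rater reliability: two raters scoring the same participants.
rater_a = np.array([4, 3, 5, 2, 4, 5, 3, 4])
rater_b = np.array([4, 3, 4, 2, 5, 5, 3, 4])
inter_rater_r = np.corrcoef(rater_a, rater_b)[0, 1]

print(f"Test-retest reliability: r = {test_retest_r:.2f}")  # close to 1 means consistent over time
print(f"Inter-rater reliability: r = {inter_rater_r:.2f}")  # close to 1 means the raters agree
```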
Test Validity
A measurement is valid if it actually measures what it's supposed to measure.
Say you designed a test to measure intelligence based on shoe size. Such a test would be reliable (shoe sizes follow a pretty universal standard), but it could in no way predict intelligence, so it wouldn't be valid.
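The shoe-size example can be simulated to show that reliability does not guarantee validity; the sketch below (not from the original slides) uses made-up numbers purely for illustration.

```python
# Minimal simulation of the shoe-size example: a highly reliable measure
# that is completely invalid as a test of intelligence. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 200

shoe_size = rng.normal(9, 1.5, n)                     # "intelligence test" score
shoe_size_retest = shoe_size + rng.normal(0, 0.1, n)  # remeasured: nearly identical
iq = rng.normal(100, 15, n)                           # actual intelligence, unrelated to shoe size

reliability = np.corrcoef(shoe_size, shoe_size_retest)[0, 1]
validity = np.corrcoef(shoe_size, iq)[0, 1]

print(f"Reliability (test-retest):   r = {reliability:.2f}")  # near 1.0 -> reliable
print(f"Validity (vs. intelligence): r = {validity:.2f}")     # near 0   -> not valid
```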