Published by Roy Cook. Modified over 8 years ago.
Validity = accuracy of the study. Reliability = consistency of results.
VALIDITY: Does your study reveal accurate and representative results?
RELIABILITY: Is the measure able to give consistent measures over time?
Reliability checks?
–Test-retest
–Split-half
–Inter-observer
–Convergence…replications
VALIDITY: Does your study reveal accurate and representative results?
Internal Validity and External Validity
–External Validity: generalizability issues
»Sampling
»Return rates
»…..
7
…External Validity Generalizeability –Across population –Across environments –Across time
Internal Validity
Convergent validity
–Different manipulations
–Different measuring devices
–Different populations…etc.
–Replication
COMMON THREATS TO VALIDITY: Selection bias
–Random selection and assignment
–Pretesting and matching
–Recruitment technique
History and Maturation Effects
–Changes in the DV over time, between pretest and post-test or between test sessions
–Control: constancy
Mortality (subject loss)
–Subject loss over time may reflect an unintended effect of the IV or DV
–Subject loss may result in selection bias
STOP FOR FUN! ESP in PSYCHOLOGY??
–Somewhat critical
–Lacking empirical validation
–Should we remain open-minded?
–Let’s give it a shot
One of these symbols on each trial
STOP until test complete
What is Chance Performance?
–1 out of 4 for each trial
–20 trials
–Chance performance = 20 × 1/4 = 5
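The chance baseline follows from the binomial model: expected hits = trials × hit probability. A short sketch, assuming independent trials with four equally likely symbols, also shows how unlikely a given score is under pure guessing:

```python
# Chance performance on the ESP demo: 20 trials, 4 symbols, p = 1/4 per trial.
from math import comb

n_trials, p = 20, 0.25
expected_hits = n_trials * p  # = 5 hits expected by guessing alone

# Probability of getting k or more hits by chance alone (binomial upper tail)
def p_at_least(k, n=n_trials, prob=p):
    return sum(comb(n, i) * prob**i * (1 - prob)**(n - i) for i in range(k, n + 1))

print(f"expected hits by chance: {expected_hits}")
print(f"P(at least 9 hits by luck alone) = {p_at_least(9):.3f}")
```

So a class member scoring a few hits above 5 is well within the range of luck; only scores far into the upper tail would call for a second look.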
Just for fun, let’s try again…this time seriously!
Answers
Statistical Regression
–Regression to the mean with repeated testing
–Watch out for extreme scores…they may be due to chance
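Regression to the mean is easy to demonstrate by simulation: give every simulated subject a fixed true ability plus random test noise, select the extreme scorers on test 1, and watch their retest mean fall back toward the population mean. All numbers below are invented for illustration.

```python
# Regression-to-the-mean sketch: true ability is constant; each test adds noise.
import random

random.seed(1)
true_mean, noise_sd = 100, 15

subjects = []
for _ in range(10_000):
    ability = random.gauss(true_mean, 10)     # stable individual difference
    test1 = ability + random.gauss(0, noise_sd)  # observed score, time 1
    test2 = ability + random.gauss(0, noise_sd)  # observed score, time 2
    subjects.append((test1, test2))

# Select the top 5% on test 1 ("extreme scorers") and compare retest means
top = sorted(subjects, key=lambda s: s[0], reverse=True)[:500]
mean_t1 = sum(s[0] for s in top) / len(top)
mean_t2 = sum(s[1] for s in top) / len(top)
print(f"extreme group: test 1 mean = {mean_t1:.1f}, test 2 mean = {mean_t2:.1f}")
```

The extreme group's retest mean drops even though nothing about the subjects changed: part of their initial extremity was lucky noise that does not repeat.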
Instrumentation
–Changes in the DV due to changes in the measurement instrument, not due to the IV
Sequence Effects (repeated-testing effects)
–Practice effects
–Test sensitization
–Boredom/fatigue
–Carry-over
Controls: randomization and counterbalancing
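Counterbalancing can be sketched with a simple cyclic Latin square, in which every condition appears once in every serial position across groups. This is a minimal illustration; a Williams design would additionally balance first-order carry-over.

```python
# Counterbalancing sketch: cyclic Latin square of condition orders.
def latin_square(conditions):
    """Rotate the condition list so each condition occupies each
    serial position exactly once across the generated orders."""
    n = len(conditions)
    return [[conditions[(i + j) % n] for j in range(n)] for i in range(n)]

orders = latin_square(["A", "B", "C", "D"])
for i, order in enumerate(orders, start=1):
    print(f"subject group {i}: {' -> '.join(order)}")
```

Averaged across the four groups, practice, fatigue, and sensitization effects fall equally on every condition instead of piling up on whichever one happens to come last.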
Subject Bias
Subjects can have attitude! RESPONSE SETS that produce inaccurate results:
–Social desirability
–Cooperativity
–Negativity
–Volunteerism
Controls: blinding, recruitment technique, deception or disguise (e.g., bogus “lie detectors”)
Experimenter Bias
Experimenters like to be correct! CONFIRMATION BIAS and demand characteristics
Controls:
–Constancy
–Automation
–Double blinding
How would you use blinding in an MJ (marijuana) drug study?
PLACEBO EFFECTS ARE REAL
–Expectation: what might someone expect MJ to do?
–Beliefs/expectations can lead to real results…that are not related to the real effects of the IV.
DOUBLE-BLIND PLACEBO CONTROL DESIGNS
–Fake MJ? Cigarette? Pill? Other?
–Constancy
–Can tease apart placebo effects from real drug effects
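A minimal sketch of double-blind assignment: conditions are randomized, subjects and experimenters see only neutral codes, and the code-to-condition key stays sealed with a third party until analysis. Subject IDs and code labels below are hypothetical.

```python
# Double-blind assignment sketch: neither subject nor experimenter
# can tell drug from placebo during data collection.
import random

random.seed(42)
subjects = [f"S{i:02d}" for i in range(1, 21)]

conditions = ["drug"] * 10 + ["placebo"] * 10
random.shuffle(conditions)  # random assignment to condition

# The key maps blind codes to conditions and is held by a third party;
# experimenter and subject ever see only the codes.
key = {f"code-{i:02d}": cond for i, cond in enumerate(conditions, start=1)}
assignment = dict(zip(subjects, key))  # subject -> blind code

# Only after data collection is the key unsealed:
# unblinded = {subj: key[code] for subj, code in assignment.items()}
```

With identical-looking drug and placebo preparations (constancy), any expectation effects fall equally on both groups, so the group difference isolates the drug's real effect.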
RANGE EFFECTS
Don’t forget: the inability to see an effect of an IV on a DV does not necessarily mean there isn’t one.
–Ceiling effects
–Floor (basement) effects
ULTIMATE CONTROL
–Control procedures may be tailored to unique experiments
–Ultimate control may never be completely achieved
BEING A REAL PSYCHOLOGICAL SCIENTIST??
The crux: identification of potential confounds AND RIVAL HYPOTHESES
–Alternative explanations for experimental results
–Alternatives to the research hypothesis
Consider the following: the Hampton Court Maze
The radial arm maze (RAM) for rats
Training of rats on the RAM
–Initial freezing
–Adaptation and exploration
–Neophobia to the food pellet
–Adaptation to food
–Rapid acquisition of the task
RAM memory errors?
–Entrance into a previously entered arm
–What counts as an entry: one foot? Two? Half-way? All the way in?
Rats become quite good
–Rarely make a memory error
–Win-shift strategy: once an arm’s food is taken, shift to a new arm
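Scoring a memory error as any re-entry into a previously visited arm can be sketched in a few lines; the arm-entry sequence below is hypothetical.

```python
# RAM scoring sketch: a memory error = re-entering an arm already visited
# on this trial (a violation of the win-shift strategy).
def count_memory_errors(entries):
    visited, errors = set(), 0
    for arm in entries:
        if arm in visited:
            errors += 1  # returned to a depleted arm
        visited.add(arm)
    return errors

trial = [3, 7, 1, 5, 3, 2, 8, 6, 4]  # arm 3 is re-entered once
print(count_memory_errors(trial))  # -> 1
```

The entry criterion matters here: whether "one foot in" or "all the way in" counts as an entry changes what gets appended to the sequence, and therefore the error count.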
Maybe they use an algorithmic strategy
–Right-right-right, etc.
To control for such strategies…forced-choice procedures
Using forced-choice procedures
–No indication of an algorithm
–Performance still very good
SO…what does rat performance in the RAM show us?
Rival hypotheses?
–Memory or sensory guidance?
–Intra-maze or extra-maze guidance?
–Spatial memory or…..?
–Motivation deficits
–Sensory deficits
–Attention deficits