1
Experiment Basics: Variables
Psych 231: Research Methods in Psychology
2
Reminders
Journal Summary 1 due in labs this week
Don't forget Quiz 6 (due Fri)
3
Many kinds of Variables
Independent variables (explanatory)
Dependent variables (response)
Scales of measurement
Errors in measurement
Extraneous variables
  Control variables
  Random variables
  Confound variables
4
Scales of measurement
Categorical variables:
  Nominal scale - categories
  Ordinal scale - categories with order
Quantitative variables:
  Interval scale
  Ratio scale
5
Interval scale: consists of ordered categories where all of the categories are intervals of exactly the same size. Example: the Fahrenheit temperature scale.
With an interval scale, equal differences between numbers on the scale reflect equal differences in magnitude; however, ratios of magnitudes are not meaningful.
[Figure: a 20° increase from 20° to 40° is the same amount of increase as a 20° increase from 60° to 80°, but 40° is not "twice as hot" as 20°]
6
Scales of measurement
Categorical variables:
  Nominal scale - categories
  Ordinal scale - categories with order
Quantitative variables:
  Interval scale - ordered categories of the same size
  Ratio scale
7
Ratio scale: An interval scale with the additional feature of an absolute zero point.
Ratios of numbers DO reflect ratios of magnitude.
It is easy to get ratio and interval scales confused.
Examples: the Kelvin scale of thermodynamic temperature (0 K = -273.15 °C = -459.67 °F); measuring your height with playing cards.
8
Ratio scale: 8 cards high
9
Interval scale: 5 cards high
10
Scales of measurement: ratio scale vs. interval scale
Ratio scale - 0 cards high means "no height":
  Can say "8 cards tall is 2 cards taller than 6 cards tall"
  Can say "8 cards tall is twice as tall as 4 cards tall"
Interval scale - 0 cards high means "as tall as the table":
  Can say "8 cards tall is 2 cards taller than 6 cards tall"
  Can NOT say "8 cards tall is twice as tall as 4 cards tall"
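A minimal numeric sketch of the card example (the heights and the 3-card table height are made up for illustration): on the ratio scale zero means "no height", so ratios of measurements are meaningful; on the interval scale zero is just "as tall as the table", so differences are meaningful but ratios are not.

```python
# Hypothetical card-height measurements for two people.
# Ratio scale: counted from the floor, so 0 cards means "no height".
ratio_heights = {"person_A": 8, "person_B": 4}

# Interval scale: counted from the table top (assume the table is 3 cards high),
# so 0 cards means "as tall as the table". The same two people measure as:
interval_heights = {"person_A": 5, "person_B": 1}

# Differences are meaningful on BOTH scales (same 4-card difference):
print(ratio_heights["person_A"] - ratio_heights["person_B"])        # 4
print(interval_heights["person_A"] - interval_heights["person_B"])  # 4

# Ratios are meaningful only on the ratio scale:
print(ratio_heights["person_A"] / ratio_heights["person_B"])        # 2.0 -> "twice as tall"
print(interval_heights["person_A"] / interval_heights["person_B"])  # 5.0 -> NOT a real ratio of heights
```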
11
Scales of measurement
Categorical variables:
  Nominal scale - categories
  Ordinal scale - categories with order
Quantitative variables:
  Interval scale - ordered categories of the same size
  Ratio scale - ordered categories of the same size with a zero point
"Best" scale? Given a choice, usually prefer the highest level of measurement possible.
12
Measuring your dependent variables
Scales of measurement
Errors in measurement
  Reliability & Validity
  Sampling error
13
Measuring the true score
Example: Measuring intelligence
  How do we measure the construct?
  How good is our measure? How does it compare to other measures of the construct? Is it a self-consistent measure?
  Internet IQ tests: Are they valid? (The Guardian, Nov. 2013)
14
Errors in measurement: in search of the "true score"
Reliability
  Do you get the same value with multiple measurements?
  Consistency - getting roughly the same results under similar conditions
Validity
  Does your measure really measure the construct?
  Is there bias in our measurement? (systematic error)
15
Dartboard analogy
Bull's eye = the "true score" for the construct (e.g., a person's intelligence)
Dart throw = a measurement (e.g., trying to measure that person's intelligence)
16
Dartboard analogy
Reliability = consistency; Validity = measuring what is intended
Bull's eye = the "true score" for the construct
Estimate of true score = average of all of the measurements
Measurement error = how far an individual measurement (dart) lands from the true score
Unreliable and invalid: the dots are spread out, and the estimate of the true score and the bull's eye are different.
17
Dartboard analogy
Bull's eye = the "true score"; Reliability = consistency; Validity = measuring what is intended
[Figure: three dartboards - reliable & valid (darts clustered on the bull's eye); unreliable & invalid (darts scattered); reliable but invalid, i.e. biased (darts tightly clustered away from the bull's eye)]
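A small simulation of the dartboard analogy (the true score, bias, and noise values are invented for illustration): reliability shows up as a small spread across repeated measurements, validity as an estimate that lands near the true score.

```python
import random

random.seed(231)
TRUE_SCORE = 100  # the "bull's eye" for the construct

def measure(n, bias, noise_sd):
    """Simulate n dart throws: true score + systematic bias + random error."""
    return [TRUE_SCORE + bias + random.gauss(0, noise_sd) for _ in range(n)]

def summarize(label, scores):
    mean = sum(scores) / len(scores)  # estimate of the true score (average of all measurements)
    spread = (sum((s - mean) ** 2 for s in scores) / (len(scores) - 1)) ** 0.5
    print(f"{label:22s} estimate={mean:6.1f}  spread={spread:5.1f}")

summarize("reliable & valid",     measure(20, bias=0,  noise_sd=2))   # tight cluster on the bull's eye
summarize("unreliable & invalid", measure(20, bias=8,  noise_sd=15))  # dots spread out, estimate off target
summarize("reliable but biased",  measure(20, bias=12, noise_sd=2))   # tight cluster, away from the bull's eye
```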
18
Errors in measurement: in search of the "true score"
Reliability
  Do you get the same value with multiple measurements?
  Consistency - getting roughly the same results under similar conditions
Validity
  Does your measure really measure the construct?
  Is there bias in our measurement? (systematic error)
19
Reliability
Observed score = true score + measurement error
A reliable measure will have a small amount of error.
Multiple "kinds" of reliability:
  Test-retest
  Internal consistency
  Inter-rater reliability
20
Reliability: Test-retest reliability
Test the same participants more than once - measurements from the same person at two different times should be consistent across different administrations.
[Figure: example score patterns labeled Reliable vs. Unreliable]
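A sketch of one common way to quantify test-retest reliability, as the correlation between the two administrations (the scores below are made up):

```python
import numpy as np

# Hypothetical scores for 8 participants tested twice on the same measure.
time1 = np.array([12, 18, 9, 22, 15, 30, 25, 11])
time2 = np.array([13, 17, 10, 24, 14, 29, 27, 12])

# Test-retest reliability: Pearson correlation between the two administrations.
r = np.corrcoef(time1, time2)[0, 1]
print(f"test-retest r = {r:.2f}")  # close to 1.0 -> consistent (reliable) measurements
```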
21
Reliability: Internal consistency reliability
Multiple items testing the same construct
Extent to which scores on the items of a measure correlate with each other
  Cronbach's alpha (α)
  Split-half reliability: correlation of the score on one half of the measure with the other half (randomly determined)
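A sketch of the two statistics named above, computed on made-up item scores: Cronbach's alpha from the item and total-score variances, and split-half reliability as the correlation between two randomly chosen halves of the items.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 10 participants x 6 items, all meant to tap the same construct.
items = np.array([
    [4, 5, 4, 3, 4, 5],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 3, 2, 3, 3, 3],
    [1, 2, 1, 1, 2, 1],
    [4, 4, 5, 4, 4, 4],
    [2, 2, 3, 2, 2, 2],
    [5, 4, 5, 5, 4, 5],
    [3, 4, 3, 3, 3, 4],
    [1, 1, 2, 1, 1, 1],
])
n_items = items.shape[1]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Split-half reliability: correlate the score on one random half of the items with the other half.
order = rng.permutation(n_items)
half1 = items[:, order[: n_items // 2]].sum(axis=1)
half2 = items[:, order[n_items // 2:]].sum(axis=1)
split_half_r = np.corrcoef(half1, half2)[0, 1]

print(f"Cronbach's alpha = {alpha:.2f}")
print(f"split-half r     = {split_half_r:.2f}")
```

In practice the split-half correlation is often adjusted upward with the Spearman-Brown formula, since each half is only half as long as the full measure.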
22
Reliability: Inter-rater reliability
At least 2 raters observe behavior
Extent to which raters agree in their observations
Are the raters consistent? Requires some training in judgment
[Figure: two observers independently rating the same behavior (e.g., "funny" vs. "not very funny")]
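A sketch of one common way to quantify inter-rater reliability for categorical judgments: percent agreement plus Cohen's kappa, which corrects for chance agreement (the ratings below are invented).

```python
from collections import Counter

# Hypothetical ratings of 10 video clips by two raters ("F" = funny, "N" = not very funny).
rater1 = ["F", "F", "N", "F", "N", "N", "F", "F", "N", "F"]
rater2 = ["F", "N", "N", "F", "N", "F", "F", "F", "N", "F"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # percent agreement

# Chance agreement: probability both raters pick the same category given their base rates.
c1, c2 = Counter(rater1), Counter(rater2)
expected = sum((c1[cat] / n) * (c2[cat] / n) for cat in set(rater1) | set(rater2))

kappa = (observed - expected) / (1 - expected)  # Cohen's kappa
print(f"agreement = {observed:.2f}, kappa = {kappa:.2f}")
```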
23
Errors in measurement: in search of the "true score"
Reliability
  Do you get the same value with multiple measurements?
  Consistency - getting roughly the same results under similar conditions
Validity
  Does your measure really measure the construct?
  Is there bias in our measurement? (systematic error)
24
Validity
Does your measure really measure what it is supposed to measure (the construct)?
There are many "kinds" of validity.
25
Many kinds of Validity
Types include construct, internal, external, face, criterion-oriented, predictive, convergent, concurrent, and discriminant validity.
26
Many kinds of Validity
Internal validity: "The degree to which a study provides causal information about behavior."
External validity: "The degree to which the results of a study apply to individuals and realistic behaviors outside of the study."
(Other kinds include construct, face, criterion-oriented, predictive, convergent, concurrent, and discriminant validity.)
27
Face Validity
At the surface level, does it look as if the measure is testing the construct?
"This guy seems smart to me, and he got a high score on my IQ measure."
28
Construct Validity
Usually requires multiple studies - a large body of evidence that supports the claim that the measure really tests the construct.
29
Internal Validity
The precision of the results: did the change in the DV result from the changes in the IV, or does it come from something else?
30
Threats to internal validity
Experimenter bias & reactivity
History – an event happens during the experiment
Maturation – participants get older (and change in other ways)
Selection – nonrandom selection may lead to biases
Mortality (attrition) – participants drop out or can't continue
Regression toward the mean – extreme performance is often followed by performance closer to the mean (e.g., the SI cover jinx, the Madden Curse)
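A small simulation of the last threat, regression toward the mean, using the "true score + error" idea from the reliability slides (all numbers are arbitrary): participants selected for extreme scores on test 1 score closer to the group mean on test 2, even though nothing about them changed.

```python
import random

random.seed(231)
N = 10_000
true_ability = [random.gauss(100, 10) for _ in range(N)]

def test(true_scores):
    # Each test = true score + independent measurement error.
    return [t + random.gauss(0, 10) for t in true_scores]

test1, test2 = test(true_ability), test(true_ability)

# Select the people with extreme (top ~5%) scores on test 1...
cutoff = sorted(test1)[int(0.95 * N)]
extreme = [i for i in range(N) if test1[i] >= cutoff]

def mean(xs):
    return sum(xs) / len(xs)

print(f"selected group, test 1: {mean([test1[i] for i in extreme]):.1f}")  # far above 100
print(f"selected group, test 2: {mean([test2[i] for i in extreme]):.1f}")  # closer to 100
```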
31
External Validity
Are experiments "real life" behavioral situations, or does the process of control put too much limitation on the "way things really work"?
Example: measuring driving while distracted
32
External Validity
Variable representativeness: the relevant variables for the behavior studied, along which the sample may vary
Subject representativeness: the characteristics of the sample and the target population along these relevant variables
Setting representativeness: ecological validity - are the properties of the research setting similar to those outside the lab?
33
Measuring your dependent variables
Scales of measurement
Errors in measurement
  Reliability & Validity
  Sampling error
34
Sampling
Errors in measurement: sampling error
Population: everybody that the research is targeted to be about
Sample: the subset of the population that actually participates in the research
35
Sampling
Sampling from the population makes data collection manageable.
Inferential statistics are then used to generalize from the sample back to the population, and allow us to quantify the sampling error.
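A sketch of what "quantifying the sampling error" can look like: draw many random samples from a hypothetical population and watch how much the sample means vary around the population mean (roughly the standard error, sigma / sqrt(n)).

```python
import random

random.seed(231)
population = [random.gauss(50, 12) for _ in range(100_000)]  # hypothetical population scores
pop_mean = sum(population) / len(population)

n = 25
sample_means = [sum(random.sample(population, n)) / n for _ in range(2_000)]

mean_of_means = sum(sample_means) / len(sample_means)
spread = (sum((m - mean_of_means) ** 2 for m in sample_means) / (len(sample_means) - 1)) ** 0.5

print(f"population mean ~ {pop_mean:.1f}")
print(f"typical sampling error (SD of sample means) ~ {spread:.1f}")  # ~ 12 / sqrt(25) = 2.4
```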
36
Sampling
Goals of "good" sampling:
  Maximize representativeness: the extent to which the characteristics of those in the sample reflect those in the population
  Reduce bias: a systematic difference between those in the sample and those in the population
Key tool: random selection
37
Sampling Methods
Probability sampling (has some element of random selection):
  Simple random sampling
  Systematic sampling
  Stratified sampling
Non-probability sampling (susceptible to biased selection):
  Convenience sampling
  Quota sampling
38
Simple random sampling
Every individual has an equal and independent chance of being selected from the population.
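A minimal sketch, assuming a made-up roster of participant IDs: drawing the sample with random.sample gives every individual the same chance of ending up in the sample.

```python
import random

population = [f"participant_{i:03d}" for i in range(1, 501)]  # hypothetical sampling frame

random.seed(231)
sample = random.sample(population, k=25)  # each person equally likely to be selected
print(sample[:5])
```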
39
Systematic sampling: selecting every nth person
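A sketch of "every nth person" with a random starting point (the roster names are hypothetical); the random start keeps the sample from depending on where the list happens to begin.

```python
import random

roster = [f"person_{i:03d}" for i in range(1, 201)]  # hypothetical ordered list of 200 people

n = 10                       # take every 10th person
random.seed(231)
start = random.randrange(n)  # random starting point between 0 and n-1
systematic_sample = roster[start::n]
print(len(systematic_sample), systematic_sample[:3])
```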
40
Cluster sampling
Step 1: Identify groups (clusters)
Step 2: Randomly select from each group
41
Convenience sampling: use the participants who are easy to get.
42
Quota sampling
Step 1: Identify the specific subgroups
Step 2: Take from each group until the desired number of individuals is reached
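A sketch of the two quota-sampling steps, using an invented participant pool: non-random (first-come) selection within each subgroup until each quota is filled.

```python
# Hypothetical sign-up sheet: (name, subgroup) in the order people volunteered.
signups = [("Ana", "women"), ("Ben", "men"), ("Cam", "men"), ("Dee", "women"),
           ("Eli", "men"), ("Fay", "women"), ("Gus", "men"), ("Hana", "women")]

quotas = {"women": 2, "men": 2}  # Step 1: identify the specific subgroups (and how many from each)

sample, counts = [], {g: 0 for g in quotas}
for name, group in signups:      # Step 2: take from each group until the desired number is reached
    if counts[group] < quotas[group]:
        sample.append(name)
        counts[group] += 1

print(sample)  # ['Ana', 'Ben', 'Cam', 'Dee'] - quotas filled, but selection is not random
```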
43
Variables
Independent variables
Dependent variables
Measurement
  Scales of measurement
  Errors in measurement
Extraneous variables
  Control variables
  Random variables
  Confound variables
44
Extraneous Variables
Control variables: holding things constant - controls for excessive random variability
Random variables: may freely vary, to spread variability equally across all experimental conditions
  Randomization: a procedure that assures that each level of an extraneous variable has an equal chance of occurring in all conditions of observation
Confound variables: variables that haven't been accounted for (manipulated, measured, randomized, controlled) that can impact changes in the dependent variable(s); a confound co-varies with both the dependent AND an independent variable
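One way to picture the randomization idea above, with a hypothetical participant list: random assignment to conditions gives each level of an extraneous variable (here, age) an equal chance of occurring in every condition, so it tends to spread evenly rather than co-vary with the IV.

```python
import random

random.seed(231)

# Hypothetical participants with an extraneous variable we are NOT controlling: age.
participants = [{"id": i, "age": random.randint(18, 40)} for i in range(40)]

# Random assignment to the two levels of the IV.
random.shuffle(participants)
condition_a, condition_b = participants[:20], participants[20:]

def mean_age(group):
    return sum(p["age"] for p in group) / len(group)

print(f"mean age, condition A: {mean_age(condition_a):.1f}")
print(f"mean age, condition B: {mean_age(condition_b):.1f}")  # similar -> age does not co-vary with the IV
```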
45
Colors and words
Divide into two groups: men and women
Instructions: Read aloud the COLOR that the words are presented in. When done, raise your hand.
Women first. Men, please close your eyes. Okay, ready?
46
List 1: Blue Green Red Purple Yellow
47
Okay, now it is the men’s turn.
Remember the instructions: Read aloud the COLOR that the words are presented in. When done, raise your hand. Okay, ready?
48
List 2: Blue Green Red Purple Yellow
49
Our results
So why the difference between the results for men versus women?
Is this support for a theory that proposes "women are good color identifiers, men are not"? Why or why not?
Let's look at the two lists.
50
List 1 (Women) and List 2 (Men) contain the same words - Blue, Green, Red, Purple, Yellow - but in one list the ink colors are Matched to the words and in the other they are Mis-Matched.
51
What resulted in the performance difference?
Our manipulated independent variable (men vs. women)?
The other variable (match/mis-match)?
Because the two variables are perfectly correlated, we can't tell which one produced the change in the DV - this is the problem with confounds.
[Figure: the two word lists, with the IV, the DV, and the confound co-varying together]
52
What DIDN’T result in the performance difference?
Extraneous variables
  Control variables: the # of words on the list; the actual words that were printed
  Random variables: the age of the men and women in the groups
These are not confounds, because they don't co-vary with the IV.
53
“Debugging your study”
Pilot studies: a trial run-through. Don't plan to publish these results; just try out the methods.
Manipulation checks: an attempt to directly measure whether the IV really affects the DV. Look for correlations with other measures of the desired effects.