Research Methods Unit 2.


1 Research Methods Unit 2

2 Intro
Psychology can be very silly ("pop-psychology").
The trick—separate fact from opinion.
Remember the definition… Psychology = systematic study of behavior & thinking

3 Two terms
Hindsight bias = looking back in time makes an event seem as though it were inevitable. "I knew it all along!" "Hindsight is 20/20." "That's common sense." A simple study can put this to the test: given 2 conflicting quotes, people agree with both.
Overconfidence = when we are more confident than correct. Another simple study: given jumbled words (OCHSA), people think they can solve them fast, but can't.

4 Scientific attitude
Curiosity – you want to know
Skepticism – show me proof!
Humility – admit when wrong

5 Scientific method
Hypothesis – a prediction
Procedure – a stated research method
Observation – objective data
Conclusion – data supports or refutes the hypothesis

6 More terms
Hypothesis = prediction that can be tested; researchers can be biased, though.
Operational definition = tries to limit bias; has two parts:
Precise statement of procedures
Something measured numerically
Think about this: suppose you want to answer, "Is plant fertilizer A or B better?" Offer an operational definition to answer this question. Numbers move us from the subjective to the objective.
Replication = ability to exactly duplicate the study

7 5 Research Methods
The first 3 are "descriptive" or "qualitative" studies; the last 2 are quantitative.
1. Case Study = thorough study of one person, in hopes of learning about all people (think of a thick folder)
+ easy with 1 person; interesting
- that 1 person may not represent all people
2. Survey = questions asked of many people
+ easy to quantify with a Likert scale; can reach lots of people
- lacks depth; wording can influence results; random sampling may not occur

8 5 Research Methods (continued)
3. Naturalistic Observation = observing a person in his/her natural environment
+ easy to do
- can be subjective (opinion-based); behavior/results can be influenced by the observer/study (the "Hawthorne Effect")
There are two quantitative research methods (next slide).

9 5 Research Methods (2 quantitative methods)
4. Correlation = statistical measure of how well two things go together (they "co-relate")
Two ways to view this: numerically & graphically
Numerically – "correlation coefficients" run from -1.0 through +1.0
0 is no correlation
+1 or -1 is a perfect correlation
Ex.: 0.87 is a high correlation; -0.87 is also a high correlation.
Graphically – scatterplots show correlations.
"Tight" dots = high correlation
"Loose" dots = low correlation

10 The Correlation Trap
Correlation does not imply causation.
A and B might go together (correlate), but that doesn't mean A causes B. You could just as easily flip it and say B causes A (and that often sounds absurd). Ex.: diet soda and tooth decay do correlate, but…
Illusory correlations – the tendency to see two things going together when they really don't. Ex.: an astrology prediction seems to come true; my lucky baseball routine helps me hit the ball. (This is superstition.)

11 5 Research Methods (2 quantitative methods)
5. Experiments
Often called the "gold standard" of research, because…
…experiments can show causation. Ex.: eating a mint causes better math scores.
Random selection—drawing participants from the named population (like "middle school students")
Needs 2 groups: control and experimental
Random assignment—randomly placing participants into the control or experimental group

12 Experiments (continued)
Double-blind procedure—neither the experimenter nor the subjects know which group is control and which is experimental. This helps cut down on bias.
Placebo effect—a fake treatment is often given; it often has an effect even though it's totally fake! If you believe it will work, it often will.
Independent variable—what the experimenter manipulates. Ask, "What's different between the two groups?" That's the IV.
Dependent variable—what the IV supposedly affects (a mint might affect math scores). Ask, "What's being measured?" That's the DV.
Confounding variables—anything besides the IV that can throw off your experiment (maybe the mint group got more time).

13 A sample experiment
You want to see if high school students' reaction time is affected by sugar-free chewing gum or by regular chewing gum (with sugar).
Procedure?
Random selection
Random assignment
IV =
DV =
Identify the parts above.

14 3 misc. study types, & 1 more thing
Longitudinal study – measures changes over time in the same group of people. Think of a long, long timeline: a longitudinal timeline.
Cross-sectional study – measures differences between age groups; it can be done in one day.
Meta-analysis – a study of many studies (the bottom line)
Ethical guidelines (set by the APA), such as…
"do no harm"
"informed consent" – you give the OK, and you can walk away
"debriefing" – they talk to you afterwards

15 Statistics 101
Three measures of central tendency
Mean = the average
Median = the "middle" number. Put the numbers in order first, then pick the middle one. If there's an even count of numbers, average the middle two.
Mode = the most frequent number
Ex.: 2, 3, 4, 4, 5, 8
Mean = 26/6 ≈ 4.33
Median = 4
Mode = 4
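The worked example above can be checked with Python's standard `statistics` module, a quick sketch using the same six numbers from the slide:

```python
import statistics

data = [2, 3, 4, 4, 5, 8]  # the numbers from the slide

mean = statistics.mean(data)      # 26 / 6, about 4.33
median = statistics.median(data)  # even count: average the two middle values, (4 + 4) / 2
mode = statistics.mode(data)      # the most frequent value

print(mean, median, mode)
```

Note that `statistics.median` handles the even-count rule (averaging the middle two) automatically.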

16 Statistics 101 (continued)
Measures of variation
Range = distance between the lowest and highest numbers (for 2, 3, 4, 4, 5, 8 the range is 8 - 2 = 6)
Standard deviation = how much the numbers vary from the mean
Normal curve = AKA the "bell curve," a distribution that often occurs in nature (for things like height and IQ)
Percentile = % of people at or below a given score ("I did better than ___%.")

17 Statistics 101 (continued)

18 Statistics 101 terms
Validity = the test measures what it's supposed to measure. (A bathroom scale is valid for weight; a tape measure is invalid for weight.)
Reliability = the test consistently yields the same results. (A bathroom scale that gives wildly different readings is not reliable.)
Statistical significance (a bit tricky)
It does NOT mean… "Wow! That's a really significant number!"
It DOES mean… the observed effect was probably NOT due to chance. (The DV should be due to the IV, not chance.)
The cutoff is usually set at 5%, shown as a p value (like p = .07 or p < .05).
If p > .05, there's more than a 5% chance that the observed effect arose merely by chance. This is NOT statistically significant.
If p < .05, there's less than a 5% chance that the observed effect arose merely by chance. This IS statistically significant.
Bottom line… low p is good → significant. High p is bad → not significant.
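One way to see what "probably not due to chance" means is a permutation test: shuffle the group labels many times and count how often chance alone produces a difference as big as the observed one. The sketch below uses made-up mint/control math scores purely for illustration:

```python
import random

# Hypothetical scores, invented for illustration: mint group vs. control group
mint = [85, 90, 78, 92, 88, 84]
control = [70, 75, 80, 68, 74, 77]

observed = sum(mint) / len(mint) - sum(control) / len(control)

# Shuffle the labels many times; count how often random grouping
# produces a mean difference at least as large as the observed one.
random.seed(0)  # fixed seed so the sketch is reproducible
combined = mint + control
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(combined)
    fake_mint = combined[:len(mint)]
    fake_control = combined[len(mint):]
    diff = sum(fake_mint) / len(fake_mint) - sum(fake_control) / len(fake_control)
    if diff >= observed:
        count += 1

p = count / trials  # estimated p value
print(f"observed difference = {observed:.2f}, p ≈ {p:.4f}")
```

With these invented numbers the groups barely overlap, so the estimated p comes out far below .05: statistically significant in the sense defined above.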

