Psych 231: Research Methods in Psychology

Presentation transcript:

Statistics (cont.)
Psych 231: Research Methods in Psychology

Announcements
- Quiz 9 is due on Friday at midnight.

Statistics
- Two general kinds of statistics:
  - Descriptive statistics: used to describe, simplify, and organize data sets; describing distributions of scores.
  - Inferential statistics: used to test claims about the population, based on data gathered from samples. Takes sampling error into account: are the results above and beyond what you'd expect by random chance?
- Example: Sample A (Treatment), mean = 80%; Sample B (No Treatment), mean = 76%. Inferential statistics are used to generalize from the samples back to the population.

Inferential Statistics
- Two approaches:
  - Hypothesis testing: "There is a statistically significant difference between the two groups."
  - Confidence intervals: "The mean difference between the two groups is 4% ± 2%."
- Example: Sample A (Treatment), mean = 80%; Sample B (No Treatment), mean = 76%. Inferential statistics are used to generalize back to the population. (A code sketch of both approaches follows below.)
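
A minimal sketch, in Python with scipy, of the two approaches applied to the same data. The sample values are hypothetical (not the class results), and the confidence_interval() call assumes a recent SciPy (1.10 or later).

```python
# Hypothetical scores; both inferential approaches on the same two samples.
import numpy as np
from scipy import stats

sample_a = np.array([82, 79, 85, 77, 81, 84, 78, 80])   # hypothetical Treatment scores
sample_b = np.array([75, 78, 74, 77, 76, 79, 73, 76])   # hypothetical No-Treatment scores

result = stats.ttest_ind(sample_a, sample_b)

# Approach 1: hypothesis testing -> a yes/no decision about H0
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

# Approach 2: confidence interval -> a range of plausible mean differences
ci = result.confidence_interval(confidence_level=0.95)   # requires SciPy >= 1.10
print(f"95% CI for the mean difference: [{ci.low:.2f}, {ci.high:.2f}]")
```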

Testing Hypotheses
- Step 1: State your hypotheses.
- Step 2: Set your decision criteria.
- Step 3: Collect your data from your sample(s).
- Step 4: Compute your test statistics.
- Step 5: Make a decision about your null hypothesis: "Reject H0" or "Fail to reject H0".

Error types
- Type I error: concluding that there is an effect (a difference between groups) when there really isn't.
  - The probability of a Type I error is alpha, sometimes called the "significance level."
  - We try to minimize this (keep it low) by picking a low alpha level; in psychology, 0.05 and 0.01 are most common.
  - For Step 5, we compare the "p-value" of our test to the alpha level to decide whether to "reject" or "fail to reject" H0. (A sketch of this decision rule follows below.)
- Type II error: concluding that there isn't an effect when there really is.
  - Related to the statistical power of a test: how likely you are to detect a difference if it is there.
- Crossing the experimenter's conclusion (Reject H0 vs. Fail to reject H0) with the real world ('truth': H0 is correct vs. H0 is wrong): rejecting H0 when H0 is correct is a Type I error; failing to reject H0 when H0 is wrong is a Type II error.
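
A minimal sketch, in Python, of the Step 5 decision rule described above; the alpha and p-value shown are assumed example numbers, not results from any real test.

```python
# Compare a test's p-value to the alpha level chosen in Step 2.
alpha = 0.05          # decision criterion (also the Type I error rate)
p_value = 0.032       # hypothetical p-value from the test computed in Step 4

if p_value < alpha:
    print("Reject H0: statistically significant difference")
else:
    print("Fail to reject H0: not statistically significant")
```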

Testing Hypotheses
- Step 1: State your hypotheses.
- Step 2: Set your decision criteria.
- Step 3: Collect your data from your sample(s).
- Step 4: Compute your test statistics.
  - Descriptive statistics (means, standard deviations, etc.)
  - Inferential statistics (t-tests, ANOVAs, etc.)
- Step 5: Make a decision about your null hypothesis.
  - Reject H0: "statistically significant differences."
  - Fail to reject H0: "not statistically significant differences."
  - Make this decision by comparing your test's "p-value" against the alpha level that you picked in Step 2.
- "Statistically significant differences": essentially this means that the observed difference is above what you'd expect by chance (standard error).

Step 4: "Generic" statistical test
- Tests the question: are there differences between groups due to a treatment?
- Two possibilities in the "real world" (the 'truth' in the error types above). First possibility:
  - H0 is true (no treatment effect): there is one population, and the two sample distributions (X̄A = 80%, X̄B = 76%) are both drawn from it.

Step 4: "Generic" statistical test
- Second possibility:
  - H0 is false (there is a treatment effect): there are two populations. People who get the treatment change, so they form a new population (the "treatment population"), and X̄A (80%) and X̄B (76%) come from different populations.

Step 4: "Generic" statistical test
- Why might the samples (X̄A and X̄B) be different? What are the sources of the variability between groups?
  - ER: random sampling error
  - ID: individual differences (if a between-subjects factor)
  - TR: the effect of a treatment

Step 4: "Generic" statistical test
- The generic test statistic is a ratio of sources of variability:
  Computed test statistic = Observed difference / Difference from chance = (TR + ID + ER) / (ID + ER)
  where ER is random sampling error, ID is individual differences (if a between-subjects factor), and TR is the effect of a treatment. (A hand-computed sketch of this ratio follows below.)
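
A minimal sketch, in Python, of the "observed difference over difference expected by chance" ratio computed by hand on hypothetical scores. With two groups this ratio is just a t statistic: the numerator carries TR + ID + ER, while the denominator (the standard error of the difference) carries only ID + ER.

```python
import numpy as np

group_a = np.array([80, 83, 79, 85, 81, 84])   # hypothetical treatment scores
group_b = np.array([74, 77, 75, 78, 73, 76])   # hypothetical no-treatment scores

# Numerator: the observed difference between the sample means.
observed_diff = group_a.mean() - group_b.mean()

# Denominator: the difference expected by chance (standard error of the difference).
chance_diff = np.sqrt(group_a.var(ddof=1) / len(group_a) +
                      group_b.var(ddof=1) / len(group_b))

test_statistic = observed_diff / chance_diff
print(f"observed difference = {observed_diff:.2f}")
print(f"difference expected by chance = {chance_diff:.2f}")
print(f"computed test statistic = {test_statistic:.2f}")
```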

Step 4: "Generic" statistical test
- The generic test statistic distribution: the computed test statistic (observed difference / difference from chance = (TR + ID + ER) / (ID + ER)) is evaluated against a distribution.
  [Figure: distribution of sample means]

Step 4: "Generic" statistical test
- The distribution of sample means is transformed into the distribution of the test statistic; the transformation could be a z-score, a t-test, or an F-ratio (ANOVA).
  [Figure: distribution of sample means alongside the distribution of the test statistic]

Step 4: "Generic" statistical test
- To reject H0, you want a computed test statistic that is large, reflecting a large treatment effect (TR).
- What's large enough? The alpha level gives us the decision criterion: it determines where the boundaries of the rejection region fall on the distribution of the test statistic.

Step 4: "Generic" statistical test
- "Two-tailed" test with α = 0.05: 2.5% of the distribution of the test statistic sits in each tail.
  - If the observed test statistic falls in either tail, reject H0.
  - If it falls in the middle region, fail to reject H0.

Step 4: "Generic" statistical test
- "One-tailed" test with α = 0.05: all 5% sits in one tail. Sometimes you know to expect a particular direction of difference (e.g., "improve memory performance").
  - If the observed test statistic falls in that tail, reject H0; otherwise, fail to reject H0.
  (A critical-value sketch for both cases follows below.)
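
A minimal sketch, in Python with scipy, of where the rejection boundaries fall for two-tailed vs. one-tailed tests; the degrees of freedom are an assumed example value.

```python
from scipy import stats

alpha = 0.05
df = 24                                    # hypothetical degrees of freedom

# Two-tailed: split alpha into 2.5% per tail.
two_tailed_crit = stats.t.ppf(1 - alpha / 2, df)
print(f"two-tailed: reject H0 if |t| > {two_tailed_crit:.2f}")

# One-tailed: all 5% in the one expected tail.
one_tailed_crit = stats.t.ppf(1 - alpha, df)
print(f"one-tailed: reject H0 if t > {one_tailed_crit:.2f}")
```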

Step 4: "Generic" statistical test
- Things that affect the computed test statistic:
  - Size of the treatment effect (effect size): the bigger the effect, the bigger the computed test statistic (TR grows in the numerator of (TR + ID + ER) / (ID + ER)).

Step 4: "Generic" statistical test
- Things that affect the computed test statistic (cont.):
  - Difference expected by chance (standard error), which depends on:
    - Variability in the population
    - Sample size
  (A sketch of these influences follows below.)
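
A minimal sketch, in Python, simulating how effect size and sample size change the computed t statistic when population variability is held fixed; all numbers are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulated_t(effect, n, sd=10.0):
    """t statistic for two samples of size n whose population means differ by `effect`."""
    a = rng.normal(50 + effect, sd, n)
    b = rng.normal(50, sd, n)
    return stats.ttest_ind(a, b).statistic

print("small effect, small n:", round(simulated_t(effect=2, n=10), 2))
print("large effect, small n:", round(simulated_t(effect=10, n=10), 2))
print("small effect, large n:", round(simulated_t(effect=2, n=200), 2))
```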

Error bars
- Two types typically:
  - Standard error (SE): the difference expected by chance.
  - Confidence interval (CI): a range of plausible estimates of the population mean.
    CI: μ = X̄ ± (t_crit)(diff by chance)
  (A sketch of both computations follows below.)
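
A minimal sketch, in Python, computing both kinds of error bars for a single group mean; the scores are hypothetical.

```python
import numpy as np
from scipy import stats

scores = np.array([78, 82, 75, 88, 80, 85, 79, 83])   # hypothetical sample
n = len(scores)

mean = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)                   # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)                  # two-tailed, 95% CI

print(f"mean = {mean:.2f}")
print(f"SE error bars:     {mean:.2f} ± {se:.2f}")
print(f"95% CI error bars: {mean:.2f} ± {t_crit * se:.2f}")
```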

Some inferential statistical tests
- 1 factor with two groups: t-tests
  - Between groups: 2 independent samples
  - Within groups: repeated measures samples (matched, related)
- 1 factor with more than two groups: Analysis of Variance (ANOVA), either between groups or repeated measures
- Multi-factorial: factorial ANOVA

T-test
- Design: 2 separate experimental conditions.
- Formula:
  T = Observed difference / Difference expected by chance = (X̄1 - X̄2) / (diff by chance)
  - The numerator is the observed difference between the condition means.
  - The denominator (diff by chance) is based on sampling error; its computation differs for between- and within-subjects t-tests.
- Degrees of freedom: based on the size of the sample and the kind of t-test.
- CI: μ = (X̄1 - X̄2) ± (t_crit)(diff by chance)
  (A between- vs. within-subjects sketch follows below.)
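
A minimal sketch, in Python with scipy, of the two kinds of t-test named above on hypothetical data; note the different degrees of freedom.

```python
import numpy as np
from scipy import stats

# Between groups: 2 independent samples -> independent-samples t-test
group_1 = np.array([80, 84, 78, 83, 81, 85, 79, 82, 84, 80, 77, 86, 81])
group_2 = np.array([74, 77, 72, 76, 75, 78, 73, 75, 76, 74, 71, 77, 74])
t_between, p_between = stats.ttest_ind(group_1, group_2)
df_between = len(group_1) + len(group_2) - 2           # n1 + n2 - 2
print(f"independent samples: t({df_between}) = {t_between:.2f}, p = {p_between:.4f}")

# Within groups: repeated measures (same people tested twice) -> paired t-test
pre  = np.array([60, 65, 58, 70, 62, 66, 59, 64, 63, 61, 67, 68, 60])
post = np.array([72, 74, 70, 83, 73, 79, 71, 77, 75, 72, 80, 81, 70])
t_within, p_within = stats.ttest_rel(post, pre)
df_within = len(pre) - 1                               # number of pairs - 1
print(f"repeated measures:   t({df_within}) = {t_within:.2f}, p = {p_within:.4f}")
```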

T-test
- Reporting your results:
  - The observed difference between conditions
  - Kind of t-test
  - Computed t-statistic
  - Degrees of freedom for the test
  - The "p-value" of the test
- Examples:
  - "The mean of the treatment group was 12 points higher than the control group. An independent samples t-test yielded a significant difference, t(24) = 5.67, p < 0.05, 95% CI [7.62, 16.38]."
  - "The mean score of the post-test was 12 points higher than the pre-test. A repeated measures t-test demonstrated that this difference was significant, t(12) = 7.50, p < 0.05, 95% CI [8.51, 15.49]."

Analysis of Variance (ANOVA)
- Designs: more than two groups (X̄A, X̄B, X̄C); 1-factor ANOVA or factorial ANOVA; both within- and between-groups factors.
- Test statistic is an F-ratio.
- Degrees of freedom: several to keep track of; the number of them depends on the design.

Analysis of Variance (ANOVA)
- With more than two groups (X̄A, X̄B, X̄C), we can't just compute a simple difference score, since there is more than one difference.
- So we use variance instead of a simple difference; variance is essentially an average difference.
  F-ratio = Observed variance / Variance from chance
  (A one-way ANOVA sketch follows below.)
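
A minimal sketch, in Python with scipy, of a one-way ANOVA on hypothetical scores for three groups; the F-ratio compares the observed variance among group means to the variance expected by chance.

```python
import numpy as np
from scipy import stats

group_a = np.array([12, 10, 14, 11, 13, 9, 12, 11])
group_b = np.array([25, 23, 27, 24, 26, 22, 25, 24])
group_c = np.array([27, 29, 26, 28, 25, 30, 27, 26])

f_ratio, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_ratio:.2f}, p = {p_value:.4f}")
# A significant F only says "not all the groups are equal";
# further tests are needed to find which groups differ.
```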

1 factor ANOVA
- 1 factor, with more than two levels (X̄A, X̄B, X̄C).
- We can't just compute a simple difference score, since there is more than one difference: A - B, B - C, and A - C.

1 factor ANOVA
- Null hypothesis H0: all the groups are equal (X̄A = X̄B = X̄C). The ANOVA tests this one!
- Alternative hypotheses HA: not all the groups are equal. Do further tests to pick between these:
  - X̄A ≠ X̄B ≠ X̄C
  - X̄A ≠ X̄B = X̄C
  - X̄A = X̄B ≠ X̄C
  - X̄A = X̄C ≠ X̄B

1 factor ANOVA
- Planned contrasts and post-hoc tests: further tests used to rule out the different alternative hypotheses (X̄A ≠ X̄B ≠ X̄C, X̄A = X̄B ≠ X̄C, X̄A ≠ X̄B = X̄C, X̄A = X̄C ≠ X̄B).
  - Test 1: A vs. B
  - Test 2: A vs. C
  - Test 3: B vs. C
  (A pairwise-comparison sketch follows below.)
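
A minimal sketch, in Python with scipy, of Bonferroni-corrected pairwise t-tests as one simple post-hoc approach (Tukey's HSD or planned contrasts are common alternatives); the data are the same hypothetical groups used in the ANOVA sketch above.

```python
from itertools import combinations
import numpy as np
from scipy import stats

groups = {
    "A": np.array([12, 10, 14, 11, 13, 9, 12, 11]),
    "B": np.array([25, 23, 27, 24, 26, 22, 25, 24]),
    "C": np.array([27, 29, 26, 28, 25, 30, 27, 26]),
}

alpha = 0.05
pairs = list(combinations(groups, 2))          # ('A','B'), ('A','C'), ('B','C')
corrected_alpha = alpha / len(pairs)           # Bonferroni correction

for name1, name2 in pairs:
    t, p = stats.ttest_ind(groups[name1], groups[name2])
    verdict = "differ" if p < corrected_alpha else "do not differ"
    print(f"{name1} vs {name2}: t = {t:.2f}, p = {p:.4f} -> {verdict}")
```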

1 factor ANOVA
- Reporting your results:
  - The observed differences
  - Kind of test
  - Computed F-ratio
  - Degrees of freedom for the test
  - The "p-value" of the test
  - Any post-hoc or planned comparison results
- Example: "The mean score of Group A was 12, Group B was 25, and Group C was 27. A 1-way ANOVA was conducted and the results yielded a significant difference, F(2,25) = 5.67, p < 0.05. Post hoc tests revealed that the differences between groups A and B and A and C were statistically reliable (respectively t(1) = 5.67, p < 0.05 & t(1) = 6.02, p < 0.05). Groups B and C did not differ significantly from one another."

Factorial ANOVAs
- We covered much of this in our experimental design lecture.
- More than one factor; factors may be within or between; the overall design may be entirely within, entirely between, or mixed.
- Many F-ratios may be computed:
  - An F-ratio is computed to test the main effect of each factor.
  - An F-ratio is computed to test each of the potential interactions between the factors.
  (A 2 x 2 between-subjects sketch follows below.)
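
A minimal sketch, in Python with statsmodels, of a 2 x 2 completely between-subjects factorial ANOVA; the data and the variable names (phone, site, score) are fabricated for illustration and are not the class experiment's results.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Fabricated balanced 2 x 2 design: 6 scores per cell.
data = pd.DataFrame({
    "phone": ["no", "no", "no", "no", "yes", "yes", "yes", "yes"] * 3,
    "site":  ["easy", "easy", "hard", "hard"] * 6,
    "score": [9, 8, 7, 6, 8, 9, 3, 2,
              9, 7, 6, 7, 9, 8, 2, 3,
              8, 9, 7, 6, 8, 8, 3, 2],
})

# Main effects of phone and site, plus their interaction.
model = ols("score ~ C(phone) * C(site)", data=data).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)   # one row (sum of squares, df, F, p) per effect
```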

Factorial design example
- Consider the results of our class experiment:
  - Main effect of cell phone ✓
  - Main effect of site type ✓
  - An interaction between cell phone and site type ✓
  [Figure: class experiment results (values shown on the slide: -0.78, 0.04)]
- Resource: Dr. Kahn's reporting stats page

Factorial ANOVAs
- Reporting your results:
  - The observed differences: because there may be a lot of these, they may be presented in a table instead of directly in the text.
  - Kind of design: e.g., "2 x 2 completely between factorial design."
  - Computed F-ratios: you may see separate paragraphs for each factor, and for interactions.
  - Degrees of freedom for the test: each F-ratio will have its own set of df's.
  - The "p-value" of the test: may want to just say "all tests were tested with an alpha level of 0.05."
  - Any post-hoc or planned comparison results: typically only the theoretically interesting comparisons are presented.