Introduction to Statistics for the Social Sciences SBS200 - Lecture Section 001, Spring 2019 Room 150 Harvill Building 9:00 - 9:50 Mondays, Wednesdays & Fridays. March 18
Even if you have not yet registered your clicker, you can still participate.
The Green Sheets
Before next exam (April 5th)
Schedule of readings: before the next exam (April 5th), please read the assigned chapters in the OpenStax textbook, and read Chapters 2, 3, and 4 in Plous (Chapter 2: Cognitive Dissonance; Chapter 3: Memory and Hindsight Bias; Chapter 4: Context Dependence).
Labs continue this week
Lab sessions: everyone should be enrolled in one of the lab sessions. Labs continue this week.
Let’s do ANOVA using Excel
A Girl Scout troop leader wondered whether providing an incentive to whoever sold the most Girl Scout cookies would have an effect on the number of cookies sold. She provided a big incentive to one troop (a trip to Hawaii), a lesser incentive to a second troop (a bicycle), and no incentive to a third troop, and then looked to see which troop sold more cookies.
Troop 1 (no incentive): 10, 8, 12, 7, 13 (n = 5, mean = 10)
Troop 2 (bicycle): 12, 14, 10, 11, 13 (n = 5, mean = 12)
Troop 3 (Hawaii): 14, 9, 19, 13, 15 (n = 5, mean = 14)
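The Excel analysis of these data can also be sketched in Python. This is a minimal sketch, assuming SciPy is available, using `scipy.stats.f_oneway` with the troop data from the slide:

```python
# One-way ANOVA on the cookie-sales data, mirroring the Excel analysis.
# f_oneway returns the observed F statistic and its p-value.
from scipy.stats import f_oneway

troop1 = [10, 8, 12, 7, 13]    # no incentive, mean = 10
troop2 = [12, 14, 10, 11, 13]  # bicycle,      mean = 12
troop3 = [14, 9, 19, 13, 15]   # Hawaii trip,  mean = 14

f_obs, p_value = f_oneway(troop1, troop2, troop3)
print(f"F(2, 12) = {f_obs:.2f}, p = {p_value:.3f}")  # F(2, 12) = 2.73
```

The function reproduces the observed F = 2.73 that the deck computes in Excel, with df = (2, 12) since there are 3 groups and 15 observations.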
Let’s do a replication of the study (new data)
Let’s do the same problem using MS Excel
Means for each group
Is the observed F greater than the critical F? Is the p-value less than .05? No, so the result is not significant: do not reject the null.
Make decision whether or not to reject null hypothesis
Observed F = 2.73; critical F(2, 12) = 3.89. 2.73 is not farther out on the curve than 3.89, so we do not reject the null hypothesis. The p-value is also not smaller than 0.05, so we do not reject the null hypothesis. Step 5: Conclusion: there appears to be no effect of type of incentive on the number of Girl Scout cookies sold.
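The critical F and the p-value used in this decision can be looked up in software rather than in a printed table. A sketch with `scipy.stats.f` (SciPy assumed available):

```python
# Look up the critical F for alpha = .05 with df = (2, 12), and the p-value
# for the observed F = 2.73, mirroring the table-based decision rule.
from scipy.stats import f

alpha = 0.05
df_between, df_within = 2, 12
f_crit = f.ppf(1 - alpha, df_between, df_within)  # ~3.89
p_value = f.sf(2.73, df_between, df_within)       # sf = right-tail area beyond F

print(f"critical F = {f_crit:.2f}, p = {p_value:.3f}")
# Observed 2.73 < critical 3.89, and p > .05, so we do not reject the null.
```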
Make decision whether or not to reject null hypothesis
Observed F = 2.73; report as F(2, 12) = 2.73, n.s.; critical F(2, 12) = 3.89. 2.73 is not farther out on the curve than 3.89, so we do not reject the null hypothesis. Conclusion: there appears to be no effect of type of incentive on the number of Girl Scout cookies sold.
The average numbers of cookies sold for three different incentives were compared. The mean number of cookie boxes sold for the “Hawaii” incentive was 14, the mean number for the “Bicycle” incentive was 12, and the mean number for the “No incentive” condition was 10. An ANOVA was conducted and there appears to be no significant difference in the number of cookies sold as a result of the different levels of incentive, F(2, 12) = 2.73, n.s.
Which is worse? Type I or type II error
Which is worse: a Type I or a Type II error? What if we were looking to see whether an individual is guilty of a crime?
Two ways to be correct: say they are guilty when they are guilty; say they are not guilty when they are innocent.
Two ways to be incorrect: say they are guilty when they are not; say they are not guilty when they are.
What would the null hypothesis be? This person is innocent - there is no crime here.
Type I error: rejecting a true null hypothesis. Saying the person is guilty when they are not (false alarm); sending an innocent person to jail (while the guilty person stays free).
Type II error: not rejecting a false null hypothesis. Saying the person is innocent when they are guilty (miss); allowing a guilty person to stay free.
Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses.
Step 2: Decision rule: alpha level (α = .05 or .01)? One- or two-tailed test? Balance between Type I and Type II error; what is the critical statistic (e.g. z, t, F, or r) value?
Step 3: Calculations.
Step 4: Make a decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null.
Step 5: Conclusion: tie the findings back in to the research problem.
We lose one degree of freedom for every parameter we estimate
Degrees of Freedom: degrees of freedom (df) is a parameter, based on the sample size, that is used to determine the value of the t statistic. Degrees of freedom tell us how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.
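The "lose one degree of freedom for every parameter we estimate" idea shows up directly when computing s. A small sketch, assuming NumPy is available:

```python
# Sample standard deviation divides by df = n - 1, because the sample mean
# (one intermediate estimate) has already been computed from the same data.
import numpy as np

scores = np.array([10, 8, 12, 7, 13])
n = len(scores)  # n = 5
df = n - 1       # one df lost to estimating the mean

s_by_hand = np.sqrt(np.sum((scores - scores.mean()) ** 2) / df)
s_numpy = scores.std(ddof=1)  # ddof=1 tells NumPy to divide by n - 1

print(df, round(s_by_hand, 3), round(s_numpy, 3))
```

Note that NumPy's default, `std(ddof=0)`, divides by n and so understates s for small samples; `ddof=1` gives the estimate the t statistic needs.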
Comparing z score distributions with t-score distributions
Similarities include: using bell-shaped distributions to make confidence interval estimations and decisions in hypothesis testing, and using a table to find areas under the curve (a different table, though - the areas often differ from those for z-scores).
Summary of the two main differences for t-scores: we are now estimating the standard deviation from the sample (we don't know the population standard deviation), and we have to deal with degrees of freedom.
Comparing z score distributions with t-score distributions
Differences include: 1) we use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample; 2) the shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.
Comparing z score distributions with t-score distributions
Please note: once sample sizes get big enough, the t-distribution (curve) starts to look exactly like the z-distribution (curve). 3) Because the shape changes, the relationship between the scores and the proportions under the curve also changes (so we would need a different table for every possible n, but just the important ones are summarized in our t-table).
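The claim that the t tails thin out toward the z-curve as n grows can be checked numerically. A sketch comparing two-tailed .05 critical values with `scipy.stats` (SciPy assumed available):

```python
# Two-tailed .05 critical values: the critical t shrinks toward the
# critical z (1.96) as degrees of freedom grow, because the t tails thin out.
from scipy.stats import norm, t

z_crit = norm.ppf(0.975)  # ~1.96
for df in (4, 24, 100, 1000):
    print(f"df = {df:4d}: critical t = {t.ppf(0.975, df):.3f}")
print(f"         critical z = {z_crit:.3f}")
```

With df = 4 the critical t is well above 2.7; by df = 1000 it is essentially 1.96, matching the z value in our table.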
We use degrees of freedom (df) to approximate sample size
Interpreting the t-table: we use degrees of freedom (df) to approximate sample size. Technically, there is a different t-distribution for each sample size; this t-table summarizes the most useful values for several of those distributions, organized by degrees of freedom. Each curve is based on its own df (based on sample size) and has its own table entries tying t-scores to areas under the curve. [Figure: t-curves for n = 17 and n = 5] Remember these useful values for z-scores: 1.64, 1.96, 2.58.
[t-table column headers: df; area between two scores; area beyond two scores (out in tails); area in each tail]
Useful values for z-scores
Notice that with a large sample size (large df), the t-table gives the same values as the z-scores: 1.64, 1.96, 2.58.
Hypothesis testing: one sample t-test
Let’s jump right in and do a t-test. Hypothesis testing: one-sample t-test. Is the mean of my observed sample consistent with the known population mean, or did it come from some other distribution? We are given the following problem: 800 students took a chemistry exam. Accidentally, 25 students got an additional ten minutes. Did this extra time make a significant difference in the scores? The average number correct for the large class was 74. The scores for the sample of 25 were:
76, 72, 78, 80, 73, 70, 81, 75, 79, 76, 77, 79, 81, 74, 62, 95, 81, 69, 84, 76, 75, 77, 74, 72, 75
Please note: in this example we are comparing our sample mean with the population mean (one-sample t-test).
Hypothesis testing (µ = 74)
Step 1: Identify the research problem / hypothesis: did the extra time given to this sample of students affect their chemistry test scores? Describe the null and alternative hypotheses, and decide on a one-tailed or two-tailed test.
H0: µ = 74
H1: µ ≠ 74
We use a different table for t-tests
Hypothesis testing, Step 2: Decision rule. α = .05, n = 25, degrees of freedom (df) = (n - 1) = (25 - 1) = 24, two-tailed test. The earlier table was for z-scores; we use a different table for t-tests.
Two-tailed test, α = .05, df = 24: critical t(24) = 2.064
Hypothesis testing, Step 3: Calculations (µ = 74)
Σx = 1911, N = 25, so the sample mean is x̄ = Σx / N = 1911 / 25 = 76.44
For each score, compute the deviation (x - x̄), e.g. 76 - 76.44 = -0.44; the deviations sum to Σ(x - x̄) = 0, and the squared deviations sum to Σ(x - x̄)² = 868.16
s = √(868.16 / 24) = 6.01
Hypothesis testing, Step 3 (continued): µ = 74, x̄ = 76.44, N = 25, s = 6.01
Standard error: s / √N = 6.01 / √25 = 1.20
Observed t(24) = (76.44 - 74) / 1.20 = 2.03 (to be compared with the critical t)
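The hand calculation can be double-checked in software. A sketch using `scipy.stats.ttest_1samp` (SciPy assumed available):

```python
# One-sample t-test: compare the 25 students' scores against mu = 74.
from scipy.stats import t, ttest_1samp

scores = [76, 72, 78, 80, 73, 70, 81, 75, 79, 76,
          77, 79, 81, 74, 62, 95, 81, 69, 84, 76,
          75, 77, 74, 72, 75]

t_obs, p_value = ttest_1samp(scores, popmean=74)
t_crit = t.ppf(0.975, df=len(scores) - 1)  # two-tailed, alpha = .05, df = 24

print(f"observed t(24) = {t_obs:.2f}")   # ~2.03
print(f"critical t(24) = {t_crit:.3f}")  # ~2.064
# 2.03 < 2.064, so we do not reject the null.
```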
Hypothesis testing, Step 4: Make a decision whether or not to reject the null hypothesis. Observed t(24) = 2.03; critical t(24) = 2.064. 2.03 is not farther out on the curve than 2.064, so we do not reject the null hypothesis. Step 5: Conclusion: the extra time did not have a significant effect on the scores.
Hypothesis testing: did the extra time given to these 25 students affect their average test score?
Start the summary with the two means (based on the DV) for the two levels of the IV; notice we are comparing a sample mean with a population mean: a single-sample t-test. Describe the type of test (t-test versus z-test) with a brief overview of the results, then finish with the statistical summary: t(24) = 2.03; n.s. (Or, if the results *were* significant, something like: t(24) = -5.71; p < 0.05.)
The mean score for those students who were given extra time was 76.44 percent correct, while the mean score for the rest of the class was only 74 percent correct. A t-test was completed and there appears to be no significant difference in the test scores for these two groups, t(24) = 2.03; n.s.
Notation: the test is reported with its degrees of freedom and the value of the observed statistic; n.s. = "not significant", p < 0.05 = "significant".
Preview of homework assignment
Thank you! See you next time!!