
1 Introduction to Statistics for the Social Sciences SBS200 - Lecture Section 001, Fall 2016 Room 150 Harvill Building 10:00 - 10:50 Mondays, Wednesdays & Fridays. Welcome

3 By the end of lecture today 10/31/16
Use this as your study guide:
Logic of hypothesis testing with t-tests
Steps for hypothesis testing for t-tests
Levels of significance (alpha): what does alpha of .05 mean? what does p < 0.05 mean? what does alpha of .01 mean? what does p < 0.01 mean?
Using Excel to complete t-tests

4 Homework Assignments
Please hand in homework assignment 17 now.
Homework Assignment 17: Hypothesis Testing, Type I versus Type II Errors. Due Monday, October 31st.
Homework Assignment 18: Hypothesis testing using z scores and t scores; comparing two means (single sample and population mean). Due Wednesday, November 2nd.

5 Before next exam (November 18th)
Please read chapters in OpenStax textbook.
Please read Chapters 2, 3, and 4 in Plous:
Chapter 2: Cognitive Dissonance
Chapter 3: Memory and Hindsight Bias
Chapter 4: Context Dependence

6 Lab sessions
Everyone will want to be enrolled in one of the lab sessions. Labs continue this week with Project 3.


11 The three parts to the summary
1. State the independent and dependent variables, and the means for each group.
2. Describe the type of test, and whether significance was found.
3. Finish with the statistical summary: F(df, df) = observed; p < 0.05 (or "n.s.").
Start the summary with the means for each level of the IV:
"The average numbers of cookie boxes sold under three different incentives were compared. The mean number of cookie boxes sold for the 'Hawaii' incentive was 14, the mean number for the 'Bicycle' incentive was 12, and the mean number for the 'No incentive' condition was 10. An ANOVA was conducted and there appears to be no significant difference in the number of cookie boxes sold as a result of the different levels of incentive, F(2, 12) = 2.73; n.s."
Notation: n.s. = "not significant"; p < 0.05 = "significant". In the symbolic summary, F names the type of test, (2, 12) gives the degrees of freedom, and 2.73 is the observed score.
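The lecture runs these tests in Excel; here is a minimal equivalent sketch in Python using scipy. The cookie counts below are hypothetical, invented only so that n = 5 per group reproduces the slide's df = (2, 12); the resulting F will not match the slide's 2.73.

```python
# A sketch of the three-part ANOVA summary with scipy.
# The data are hypothetical (5 scores per group -> df = (2, 12)).
from scipy import stats

hawaii  = [16, 14, 13, 15, 12]   # boxes sold under the "Hawaii" incentive
bicycle = [13, 11, 12, 14, 10]   # boxes sold under the "Bicycle" incentive
none    = [11,  9, 10, 12,  8]   # boxes sold with no incentive

f_obs, p = stats.f_oneway(hawaii, bicycle, none)

k = 3                                        # levels of the IV
n_total = len(hawaii) + len(bicycle) + len(none)
df_between, df_within = k - 1, n_total - k   # (2, 12)

verdict = "p < 0.05" if p < 0.05 else "n.s."
print(f"F({df_between}, {df_within}) = {f_obs:.2f}; {verdict}")
```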

13 Review: Rejecting the null hypothesis
The result is "statistically significant" if:
the observed statistic is larger than the critical statistic (observed stat > critical stat). If we want to reject the null, we want our t (or z or r or F or χ²) to be big!
the p value is less than 0.05 (which is our alpha): p < 0.05. If we want to reject the null, we want our "p" to be small!
If we reject the null hypothesis, then we have support for our alternative hypothesis.

14 Review: Deciding whether or not to reject the null hypothesis (.05 versus .01 alpha levels)
What if our observed z = 2.0? How would the critical z change?
α = .05 (significance level = .05): critical z = ±1.96. Observed 2.0 > 1.96, so p < 0.05: a significant difference; reject the null.
α = .01 (significance level = .01): critical z = ±2.58. Observed 2.0 < 2.58: not a significant difference; do not reject the null.
Remember, reject the null if the observed z is bigger than the critical z.
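The decision rule as a minimal sketch in Python, using the slide's two-tailed critical values:

```python
# Reject the null when |observed z| exceeds the critical z for the
# chosen alpha (two-tailed values from the slide: 1.96 and 2.58).
observed_z = 2.0

for alpha, critical_z in [(0.05, 1.96), (0.01, 2.58)]:
    if abs(observed_z) > critical_z:
        print(f"alpha = {alpha}: {observed_z} > {critical_z} -> reject the null (significant)")
    else:
        print(f"alpha = {alpha}: {observed_z} < {critical_z} -> do not reject the null (n.s.)")
```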

15 Review: One versus two tail test of significance (5% versus 1% alpha levels)
What if our observed z = 2.45? How would the critical z change?
α = .05: one-tailed critical z = 1.64 (reject the null); two-tailed critical z = ±1.96 (reject the null).
α = .01: one-tailed critical z = 2.33 (reject the null); two-tailed critical z = ±2.58 (do not reject the null).
Remember, reject the null if the observed z is bigger than the critical z.
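These critical values come from the inverse of the standard normal CDF; a short sketch with scipy:

```python
# One-tailed puts all of alpha in one tail; two-tailed splits it.
from scipy.stats import norm

observed_z = 2.45
for alpha in (0.05, 0.01):
    one_tailed = norm.ppf(1 - alpha)        # 1.64 for .05, 2.33 for .01
    two_tailed = norm.ppf(1 - alpha / 2)    # 1.96 for .05, 2.58 for .01
    print(f"alpha = {alpha}: one-tailed {one_tailed:.2f} "
          f"({'reject' if observed_z > one_tailed else 'do not reject'}), "
          f"two-tailed {two_tailed:.2f} "
          f"({'reject' if observed_z > two_tailed else 'do not reject'})")
```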

16 Remember, you should know these four formulas by heart
[The four formulas appeared as images on the slide. Legend: "SS" = Sum of Squares; "n" = number of scores; "df" = degrees of freedom]
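The formula images did not survive the transcript extraction. Judging from the legend (SS, n, df), they are most plausibly the standard definitional and computational formulas for SS plus the sample variance and standard deviation; this is a reconstruction, not a copy of the slide:

```latex
% Likely candidates, reconstructed from the legend; standard definitions.
\begin{align*}
  \text{SS} &= \sum (x - \bar{x})^2                      && \text{definitional formula}\\
  \text{SS} &= \sum x^2 - \frac{\left(\sum x\right)^2}{n} && \text{computational formula}\\
  s^2 &= \frac{\text{SS}}{n - 1} = \frac{\text{SS}}{df}   && \text{sample variance}\\
  s &= \sqrt{\frac{\text{SS}}{df}}                        && \text{sample standard deviation}
\end{align*}
```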

18 Degrees of Freedom
We lose one degree of freedom for every parameter we estimate. Degrees of freedom (d.f.) is a parameter, based on the sample size, that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation. For example, in a single-sample t-test we estimate one parameter (the mean) on the way to calculating s, so df = n - 1.

19 Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses.
Step 2: Decision rule - find the "critical score".
For a z score: alpha level? (α = .05 vs .01); prediction (one- vs two-tailed).
For a t score: alpha level? (α = .05 vs .01); prediction (one- vs two-tailed); degrees of freedom.
(Population versus sample standard deviation decides which score: use z if the population standard deviation is known, t if it must be estimated from the sample.)
Step 3: Calculations.
Step 4: Make decision - if the calculated score > the critical score, then reject the null.
Step 5: Conclusion - tie the findings back to the research problem: state the IV, DV and means; type of test and whether significant; symbolic summary.
(The five steps are walked through in the sketch below.)
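A minimal sketch of the five steps for a single-sample t-test, in Python rather than the Excel workflow the lecture uses; the scores and the population mean are hypothetical:

```python
# The five steps for a single-sample t-test, with invented data.
from scipy import stats

# Step 1: Research problem. H0: the sample comes from a population with
#         mean mu = 100. H1: the population mean differs from 100.
mu = 100
sample = [104, 98, 110, 105, 99, 107, 103, 101]   # hypothetical scores

# Step 2: Decision rule. alpha = .05, two-tailed, df = n - 1.
alpha = 0.05
df = len(sample) - 1
critical_t = stats.t.ppf(1 - alpha / 2, df)

# Step 3: Calculations.
observed_t, p_value = stats.ttest_1samp(sample, mu)

# Step 4: Decision - if the calculated score > the critical score, reject H0.
reject = abs(observed_t) > critical_t

# Step 5: Conclusion, ending with the symbolic summary.
verdict = "p < 0.05" if reject else "n.s."
print(f"t({df}) = {observed_t:.2f}; critical t = {critical_t:.2f}; {verdict}")
```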

20 Comparing z score distributions with t-score distributions
Similarities: both use bell-shaped distributions to make confidence interval estimations and decisions in hypothesis testing, and both use a table to find areas under the curve (a different table, though; the areas often differ from those for z scores).
Summary of the 2 main differences for t-scores: we are now estimating the standard deviation from the sample (we don't know the population standard deviation), and we have to deal with degrees of freedom.

21 Comparing z score distributions with t-score distributions
Differences include:
1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample.
2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

22 Comparing z score distributions with t-score distributions
Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve).
3) Because the shape changes, the relationship between the scores and the proportions under the curve changes. (So we would need a different table for each possible n; just the important ones are summarized in our t-table, as illustrated below.)
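A short sketch showing why the t-table needs a row per df, and how its values converge on the z values (computed with scipy):

```python
# Two-tailed critical values at alpha = .05 for several degrees of
# freedom, against the z value they converge toward.
from scipy.stats import norm, t

alpha = 0.05
print(f"z critical (two-tailed): {norm.ppf(1 - alpha / 2):.3f}")   # 1.960
for df in (4, 9, 29, 99, 999):
    print(f"t critical, df = {df:4d}: {t.ppf(1 - alpha / 2, df):.3f}")
# The t values (2.776, 2.262, 2.045, ...) shrink toward 1.960 as df grows.
```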

23 A quick re-visit with the law of large numbers
Relationship: increased sample size → decreased variability → smaller "critical values". As n goes up, variability goes down.
Remember these useful values for z-scores: 1.64, 1.96, 2.58

24 Law of large numbers
As the number of measurements increases, the data become more stable and a better approximation of the true signal (e.g., the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (the static cancels out). With only a few people, any little error gets noticed (it becomes exaggerated when we look at the whole group); with many people, any little error gets corrected (it becomes minimized when we look at the whole group).
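A quick simulation of this idea (a hypothetical normal population; standard-library Python only):

```python
# Sample means drift toward the true mean, and their error shrinks,
# as n grows. The "true signal" here is a population with mean 100.
import random
import statistics

random.seed(1)
true_mean, true_sd = 100, 15

for n in (5, 50, 500, 5000):
    sample = [random.gauss(true_mean, true_sd) for _ in range(n)]
    print(f"n = {n:5d}: sample mean = {statistics.mean(sample):7.2f}")
# The estimates settle closer and closer to 100 as n increases.
```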

25 A note on z scores and t scores
The numerator is always the distance between means (how far apart the distributions are, the "effect size"). The denominator is always a measure of variability (how wide the curves are, or how much overlap there is between distributions):
z or t = (difference between means) / (variability of curve(s), i.e. within-group variability)
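In symbols, using the standard single-sample forms (consistent with the slide's numerator/denominator description; the slide's own images did not survive extraction):

```latex
% Numerator: distance between means. Denominator: variability
% (the standard error), using sigma for z and the estimate s for t.
z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}
\qquad
t = \frac{\bar{x} - \mu}{s / \sqrt{n}}
```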

26 Effect size is considered relative to variability of distributions
1. Larger variance: harder to find a significant difference (the same treatment effect is hidden by the overlap between curves).
2. Smaller variance: easier to find a significant difference.
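A small sketch of the same point with hypothetical data: two pairs of groups with the same 5-point treatment effect but different noise levels, run through scipy's independent-samples t-test:

```python
# Same treatment effect (difference between means), different
# variability: the high-variance pair is harder to call significant.
import random
from scipy import stats

random.seed(3)

def two_groups(sd):
    # Two groups whose population means differ by 5, with noise level sd.
    a = [random.gauss(100, sd) for _ in range(15)]
    b = [random.gauss(105, sd) for _ in range(15)]
    return a, b

for sd in (20, 2):          # large variance vs small variance
    a, b = two_groups(sd)
    t_obs, p = stats.ttest_ind(a, b)
    print(f"sd = {sd:2d}: t = {t_obs:6.2f}, p = {p:.4f}")
# Typically only the sd = 2 case comes out p < 0.05.
```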

27 Effect size is considered relative to variability of distributions
The treatment effect is the difference between the means; it is judged against the variability of the curve(s) (the within-group variability).

28 A note on variability versus effect size
The test statistic compares the difference between the means (the effect size) to the variability of the curve(s) (the within-group variability).

30 Thank you! See you next time!!

