2
Introduction to Statistics for the Social Sciences SBS200, COMM200, GEOG200, PA200, POL200, or SOC200 Lecture Section 001, Spring 2016 Room 150 Harvill Building 9:00 - 9:50 Mondays, Wednesdays & Fridays
4
By the end of lecture today (3/21/16): Overview of Project 3; t-tests
5
Before next exam (April 8th): Please read Chapters 1–11 in the OpenStax textbook. Please read Chapters 2, 3, and 4 in Plous: Chapter 2: Cognitive Dissonance; Chapter 3: Memory and Hindsight Bias; Chapter 4: Context Dependence
6
Homework: On the class website, please complete homework worksheet #19, One-sample z and t hypothesis tests. Due: Wednesday, March 23rd
7
Everyone will want to be enrolled in one of the lab sessions. Labs continue this week: Designing Project 3
13
Review: One- versus two-tailed tests of significance: comparing different critical scores at the same alpha level (e.g. alpha = 5%). How would the critical z change? Pros and cons… [Figure: a one-tailed curve with 5% in one tail and 95% in the body, versus a two-tailed curve with 2.5% in each tail and 95% in the middle.]
14
Review: One versus two-tailed tests of significance, 5% versus 1% alpha levels. How would the critical z change?
α = .05 (significance level = .05): one-tailed, -1.64 or +1.64; two-tailed, -1.96 or +1.96
α = .01 (significance level = .01): one-tailed, -2.33 or +2.33; two-tailed, -2.58 or +2.58
What if our observed z = 2.0? Reject the null at α = .05 (2.0 is beyond both 1.64 and 1.96); do not reject the null at α = .01 (2.0 is not beyond 2.33 or 2.58). Remember: reject the null if the observed z is bigger (more extreme) than the critical z.
15
Review: One versus two-tailed tests of significance, 5% versus 1% alpha levels, same critical values as above.
What if our observed z = 2.45? Reject the null at α = .05 (one- or two-tailed) and at α = .01 one-tailed (2.45 is beyond 1.64, 1.96, and 2.33); do not reject the null at α = .01 two-tailed (2.45 is not beyond 2.58). Remember: reject the null if the observed z is bigger (more extreme) than the critical z.
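Not from the slides: a minimal scipy sketch, assuming it is helpful to reproduce these critical z values numerically. The alpha levels are the ones in the review above; everything else is illustrative.

    from scipy.stats import norm

    for alpha in (0.05, 0.01):
        one_tailed = norm.ppf(1 - alpha)       # all of alpha in one tail
        two_tailed = norm.ppf(1 - alpha / 2)   # alpha split across both tails
        print(f"alpha={alpha}: one-tailed {one_tailed:.2f}, two-tailed {two_tailed:.2f}")

    # alpha=0.05: one-tailed 1.64, two-tailed 1.96
    # alpha=0.01: one-tailed 2.33, two-tailed 2.58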
16
Comparing two means? Use a t-test. Study Type 2: t-test. We are looking to compare two means. http://www.youtube.com/watch?v=n4WQhJHGQB4
18
Hypothesis testing: a review
If the observed statistic is more extreme than the critical statistic in the distribution (curve), then it is so rare (taking the variability into account) that we conclude it must come from some other distribution. The decision considers both effect size and variability. We reject the null hypothesis, we have a significant result, and we have support for our alternative hypothesis: p < 0.05 (p < α).
If the observed statistic is NOT more extreme than the critical statistic in the distribution (curve), then we know it is a common score (either because the effect size is too small or because the variability is too big) and is likely to be part of this null distribution, so we conclude it must come from this distribution. The decision considers effect size and variability; the data could simply be overly variable. We do not reject the null hypothesis and do not have support for our alternative hypothesis: p is not less than 0.05 (p not less than α); p is n.s.
[Figure: the observed statistic is the difference between means divided by the variability of the curves (within-group variability), compared against the critical statistic.]
19
Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses
Step 2: Decision rule. Alpha level? (α = .05 or .01)? One or two-tailed test? (a balance between Type I versus Type II error) Critical statistic (e.g. z, t, F, or r) value?
Step 3: Calculations
Step 4: Make the decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null
Step 5: Conclusion - tie findings back in to the research problem
20
Degrees of Freedom Degrees of Freedom ( d.f. ) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation. We lose one degree of freedom for every parameter we estimate
21
A note on z scores and t scores: the numerator is always the distance between means (how far apart the distributions are, or the "effect size"); the denominator is always a measure of variability (how wide the distributions are or how much they overlap, i.e. the within-group variability).
22
A note on variability versus effect size. [Figure: the difference between means (effect size) over the variability of the curves (within-group variability).]
24
Effect size is considered relative to the variability of the distributions: 1. Larger variance makes it harder to find a significant difference. 2. Smaller variance makes it easier to find a significant difference. [Figure: the same treatment effect shown with wide versus narrow distributions.]
25
Effect size is considered relative to the variability of the distributions. [Figure: the same treatment effect again, labeled as the difference between means over the variability of the curves (within-group variability).]
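A small illustrative sketch (not part of the lecture materials): with the same difference between means, a larger within-group spread shrinks the t statistic and a smaller spread inflates it. All of the numbers below are made up.

    import math

    def t_stat(m1, m2, s1, s2, n1, n2):
        # difference between means divided by (within-group) variability
        return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

    # Same 5-point treatment effect, different spread
    print(t_stat(30, 25, s1=12, s2=12, n1=10, n2=10))  # large variance -> small t (about 0.9)
    print(t_stat(30, 25, s1=3, s2=3, n1=10, n2=10))    # small variance -> large t (about 3.7)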
26
Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses
Step 2: Decision rule: find the "critical z" score. Alpha level? (α = .05 or .01)? One versus two-tailed test?
Step 3: Calculations
Step 4: Make the decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null
Step 5: Conclusion - tie findings back in to the research problem
Population versus sample standard deviation: how is a t score different from a z score?
27
Comparing z-score distributions with t-score distributions
Similarities include: using bell-shaped distributions to make confidence interval estimations and decisions in hypothesis testing; using a table to find areas under the curve (a different table, though – areas often differ from z scores).
Summary of the 2 main differences: we are now estimating the standard deviation from the sample (we don't know the population standard deviation), and we have to deal with degrees of freedom.
28
Comparing z-score distributions with t-score distributions
Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n).
Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.
29
Comparing z-score distributions with t-score distributions
Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample.
The critical t (just like the critical z) separates common from rare scores: it is used to define both the common scores (the "confidence interval") and the rare scores (the "region of rejection").
31
Comparing z-score distributions with t-score distributions
Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). 3) Because the shape changes, the relationship between the scores and the proportions under the curve changes (so we would need a different table for all the different possible n's, but just the important ones are summarized in our t-table).
Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve).
32
A quick revisit of the law of large numbers: the relationship between increased sample size, decreased variability, and smaller "critical values." As n goes up, variability goes down.
33
Law of large numbers: As the number of measurements increases, the data become more stable and a better approximation of the true signal (e.g. the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (the static cancels out). http://www.youtube.com/watch?v=ne6tB2KiZuk With only a few people, any little error is noticed (it becomes exaggerated when we look at the whole group); with many people, any little error is corrected (it becomes minimized when we look at the whole group).
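A quick simulation sketch (not from the slides) of the law of large numbers; the normal(100, 15) population is just a made-up example.

    import numpy as np

    rng = np.random.default_rng(0)
    draws = rng.normal(loc=100, scale=15, size=10_000)             # hypothetical scores
    running_mean = np.cumsum(draws) / np.arange(1, draws.size + 1)

    for n in (5, 50, 500, 5000):
        print(n, round(running_mean[n - 1], 2))
    # The running mean wanders when n is small and settles near 100 as n grows.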
34
Interpreting the t-table: Technically, we have a different t-distribution for each sample size. This t-table summarizes the most useful values for several of these distributions, organized by degrees of freedom. We use degrees of freedom (df) to approximate sample size: each curve is based on its own degrees of freedom (based on sample size) and would need its own table tying t-scores to the area under the curve. [Figure: example curves for n = 5 and n = 17.] Remember these useful critical values for z-scores? 1.64, 1.96, and 2.58.
35
[t-table column headings: df; area between two scores; area beyond two scores (out in the tails); area in each tail (out in the tails).]
36
[Same t-table columns: df; area between two scores; area beyond two scores (out in the tails); area in each tail (out in the tails).] Notice that with a large sample size the critical values are the same as the z-score values. Remember these useful values for z-scores? 1.64, 1.96, and 2.58.
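Not from the slides: a small scipy check of how the two-tailed .05 critical t shrinks toward the z value of 1.96 as degrees of freedom grow.

    from scipy.stats import norm, t

    # Two-tailed alpha = .05 leaves 2.5% in each tail
    for df in (4, 9, 30, 100, 10_000):
        print(df, round(t.ppf(0.975, df), 3))   # 2.776, 2.262, 2.042, 1.984, 1.960
    print("z:", round(norm.ppf(0.975), 3))      # 1.960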
37
Degrees of Freedom Degrees of Freedom ( d.f. ) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.
38
Pop Quiz – Part 1: Standard deviation and variance, for sample and population. These would be helpful to know by heart – please memorize these formulas.
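The formulas appear only as images on the slide; for reference, these are the standard definitions (population on the left of each pair, sample on the right):

    \sigma^2 = \frac{\sum (x - \mu)^2}{N}
    \qquad
    s^2 = \frac{\sum (x - \bar{x})^2}{n - 1}

    \sigma = \sqrt{\frac{\sum (x - \mu)^2}{N}}
    \qquad
    s = \sqrt{\frac{\sum (x - \bar{x})^2}{n - 1}}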
39
Pop Quiz – Part 1 (continued): Standard deviation and variance, for sample and population. Part 2: When we move from a two-tailed test to a one-tailed test, what happens to the critical z score (bigger or smaller)? Draw a picture. What effect does this have on the hypothesis test (easier or harder to reject the null)?
40
Pop Quiz – Part 3
1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.)
2. How many steps are there in hypothesis testing? (What are they?)
3. What is our formula for degrees of freedom in a one-sample t-test?
4. We lose one degree of freedom for every ________________
5. What are the three parts to the summary (below)?
"The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s."
41
Pop Quiz – answers to Parts 1 and 2
Part 1: Standard deviation and variance, for sample and population.
Part 2: When we move from a two-tailed test to a one-tailed test, the critical value gets smaller (draw a picture), so it gets easier to reject the null.
42
Pop Quiz / Writing Assignment
1. When do we use a t-test and when do we use a z-test? (Be sure to write out the formulae.) Population versus sample standard deviation: use the t-test when you don't know the standard deviation of the population and therefore have to estimate it using the standard deviation of the sample.
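The formulae themselves are shown as images on the slide; the standard one-sample versions are:

    z = \frac{\bar{x} - \mu}{\sigma / \sqrt{n}}
    \qquad
    t = \frac{\bar{x} - \mu}{s / \sqrt{n}}, \quad df = n - 1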
43
Five steps to hypothesis testing
Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses
Step 2: Decision rule: find the "critical z" score. Alpha level? (α = .05 or .01)? One versus two-tailed test?
Step 3: Calculate the observed z score
Step 4: Compare the "observed z" with the "critical z": if observed z > critical z, then reject the null; p < 0.05 and we have a significant finding
Step 5: Conclusion - tie findings back in to the research problem
How is a t score similar to a z score? Same logic and same steps. How is a t score different from a z score?
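A hypothetical end-to-end sketch (not from the lecture) of the five steps for a one-sample t-test; the response-time data and the hypothesized mean of 30 minutes are invented for illustration.

    from math import sqrt
    from statistics import mean, stdev
    from scipy.stats import t

    # Step 1: H0: mu = 30 minutes, H1: mu != 30 minutes (two-tailed)
    times = [24, 27, 22, 31, 25, 23, 26, 28, 21, 29]   # made-up sample, n = 10
    mu0 = 30

    # Step 2: decision rule: alpha = .05, two-tailed, df = n - 1
    n = len(times)
    critical_t = t.ppf(0.975, n - 1)                   # about 2.262 for df = 9

    # Step 3: calculations (observed t)
    observed_t = (mean(times) - mu0) / (stdev(times) / sqrt(n))

    # Step 4: decision
    reject = abs(observed_t) > critical_t

    # Step 5: conclusion ties back to the research problem
    print(f"t({n - 1}) = {observed_t:.2f}, critical t = {critical_t:.2f}, reject H0: {reject}")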
44
Writing Assignment
3. What is our formula for degrees of freedom in a one-sample t-test?
One-sample t-test: degrees of freedom (df) = n – 1
Two-sample t-test: degrees of freedom (df) = (n1 – 1) + (n2 – 1), one term for the first sample and one for the second
4. We lose one degree of freedom for every parameter we estimate.
Use the word "parameter" when describing a whole population (not just a sample). Usually we don't know about the whole population, so we have to guess by using what we know about our sample. A shorthand way to let the reader know we are describing a population (a parameter) is to use a Greek letter: for example, σ for the population standard deviation, and s for the sample standard deviation. In a t-test we never know the population standard deviation (the parameter σ); we have to estimate this one parameter (using "s"), so we lose one df, and our degrees of freedom are n – 1.
45
Writing Assignment
5. What are the three parts to the summary (below)?
"The mean response time following the sheriff's new plan was 24 minutes, while the mean response time prior to the new plan was 30 minutes. A t-test was completed and there appears to be no significant difference in the response time following the implementation of the new plan, t(9) = -1.71; n.s."
(1) Start the summary with the two means (based on the DV) for the two levels of the IV. (2) Describe the type of test (t-test versus ANOVA) with a brief overview of the results. (3) Finish with the statistical summary: the type of test with its degrees of freedom, the value of the observed statistic, and the significance, where n.s. = "not significant" and p < 0.05 = "significant", e.g. t(4) = 1.96; n.s., or, if it *were* significant, t(9) = 3.93; p < 0.05.
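As a sanity check (not part of the original slide), the reported t(9) = -1.71 can be verified against the two-tailed .05 criterion with scipy; it falls short of the critical value of about 2.262, which is why it is reported as n.s.

    from scipy.stats import t

    observed_t, df = -1.71, 9
    critical_t = t.ppf(0.975, df)                 # two-tailed alpha = .05
    p_value = 2 * t.cdf(-abs(observed_t), df)     # two-tailed p-value
    print(abs(observed_t) > critical_t)           # False -> fail to reject the null
    print(p_value < 0.05)                         # False -> report as n.s.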