
1

2 [Seating chart: Gallagher Theater, rows A through S with numbered desks, lecturer's desk, and screen. 26 Left-Handed Desks: A14, B16, B20, C19, D16, D20, E15, E19, F16, F20, G19, H16, H20, I15, J16, J20, K19, L16, L20, M15, M19, N16, P20, Q13, Q16, S4. 5 Broken Desks: B9, E12, G9, H3, M17. Need Labels: B5, E1, I16, J17, K8, M4, O1, P16.]

3 [Seating chart: Social Sciences 100, rows A through R with numbered desks, lecturer's desk, projection booth, table, and screen; right/left-handed and broken desks marked.]

4 MGMT 276: Statistical Inference in Management Fall, 2014 Green sheets

5 Reminder Talking or whispering to your neighbor can be a problem for us – please consider writing short notes. A note on doodling

6

7 Schedule of readings. Before our next exam (November 6th): Lind (10 - 12): Chapter 10: One-sample Tests of Hypothesis; Chapter 11: Two-sample Tests of Hypothesis; Chapter 12: Analysis of Variance. Plous (2, 3, & 4): Chapter 2: Cognitive Dissonance; Chapter 3: Memory and Hindsight Bias; Chapter 4: Context Dependence.

8 Homework. On the class website: please print and complete the homework worksheets. Assignment 14: Hypothesis Testing using t-tests, due Thursday, October 30th. Assignments 15 & 16: Hypothesis Testing using t-tests, due Tuesday, November 4th.

9 By the end of lecture today 10/30/14 Use this as your study guide Logic of hypothesis testing Steps for hypothesis testing Levels of significance (Levels of alpha) what does p < 0.05 mean? what does p < 0.01 mean? Hypothesis testing with t-scores (one-sample) Hypothesis testing with t-scores (two independent samples) Constructing brief, complete summary statements

10 Review. A note on z-scores and t-scores: the numerator is always the distance between the means (how far apart the distributions are, or the "effect size"); the denominator is always a measure of variability (within-group variability: how wide the curves are, or how much they overlap).
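
A minimal sketch in LaTeX of the general form this slide describes; the three specific versions below are the standard one-sample z, one-sample t, and two-sample pooled-t formulas that later slides in this deck compute:

```latex
z=\frac{\bar{X}-\mu}{\sigma/\sqrt{n}},\qquad
t=\frac{\bar{X}-\mu}{s/\sqrt{n}},\qquad
t=\frac{\bar{X}_1-\bar{X}_2}{\sqrt{s^2_{\text{pooled}}/n_1+s^2_{\text{pooled}}/n_2}}
```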

11 Review. A note on variability versus effect size: the difference between the means (effect size) is judged against the variability of the curves (within-group variability).

12 Review. A note on variability versus effect size (continued): again, the difference between the means is weighed against the variability of the curves (within-group variability).

13 Effect size is considered relative to the variability of the distributions: 1. the larger the variance, the harder it is to find a significant difference between the treatment-effect curves; 2. the smaller the variance, the easier it is to find a significant difference.

14 Effect size is considered relative to the variability of the distributions: the difference between the means (treatment effect) is weighed against the variability of the curves (within-group variability).

15 Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. Step 2: Decision rule: find the "critical z" score. Alpha level? (α = .05 or .01)? Step 3: Calculations. Step 4: Make a decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion - tie the findings back to the research problem. Population versus sample standard deviation. How is a t-score different from a z-score? One- versus two-tailed test.
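
A minimal Python sketch of these five steps for a one-sample z-test, assuming SciPy is available; the function name and the alpha default are illustrative, not from the slides (the example call uses the potato-bag numbers that appear later in the lecture):

```python
from math import sqrt
from scipy import stats

def one_sample_z_test(sample_mean, pop_mean, pop_sd, n, alpha=0.05):
    # Step 1: hypotheses -- H0: mu = pop_mean, Ha: mu != pop_mean (two-tailed)
    # Step 2: decision rule -- find the critical z for this alpha
    critical_z = stats.norm.ppf(1 - alpha / 2)
    # Step 3: calculations -- observed z = (difference between means) / (standard error)
    observed_z = (sample_mean - pop_mean) / (pop_sd / sqrt(n))
    # Step 4: decision -- reject H0 if |observed z| is bigger than critical z
    reject = abs(observed_z) > critical_z
    # Step 5: conclusion -- tie the result back to the research problem
    return observed_z, critical_z, reject

print(one_sample_z_test(sample_mean=6, pop_mean=5, pop_sd=1, n=16))
# -> (4.0, 1.959..., True): reject the null
```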

16 Comparing z-score distributions with t-score distributions. Similarities include: using bell-shaped distributions to make confidence-interval estimations and decisions in hypothesis testing; using a table to find areas under the curve (a different table, though: the areas often differ from those for z-scores). Summary of the 2 main differences: we are now estimating the standard deviation from the sample (we don't know the population standard deviation), and we have to deal with degrees of freedom.

17 Interpreting the t-table. Technically, we have a different t-distribution for each sample size; the t-table summarizes the most useful values for several distributions, organized by degrees of freedom. We use degrees of freedom (df) to approximate sample size: each curve is based on its own degrees of freedom (based on sample size) and has its own row of the table tying t-scores to area under the curve. Remember these useful values for z-scores? 1.64, 1.96, 2.58. (Example curves shown for n = 17 and n = 5.)

18 Comparison of z and t. For very small samples, t-values differ substantially from the normal. As degrees of freedom increase, the t-values approach the normal z-values. For example, for n = 31, the degrees of freedom are: n - 1 = 31 - 1 = 30 df. What would the t-value be for a 90% confidence interval?
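
A small sketch (assuming SciPy) that looks up the critical t for a 90% confidence interval with df = 30, and shows how critical t values approach the familiar z values as df grows:

```python
from scipy import stats

# 90% CI, two-tailed: 5% in each tail, df = n - 1 = 30
print(round(stats.t.ppf(0.95, df=30), 3))       # ~1.697 (the z value would be ~1.645)

# Critical t for a two-tailed alpha = .05 at increasing df
for df in (4, 15, 30, 100, 10000):
    print(df, round(stats.t.ppf(0.975, df=df), 3))
# 4 -> 2.776, 15 -> 2.131, 30 -> 2.042, 100 -> 1.984, 10000 -> ~1.960 (the z value)
```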

19 Degrees of Freedom Degrees of Freedom ( d.f. ) is a parameter based on the sample size that is used to determine the value of the t statistic. Degrees of freedom tell how many observations are used to calculate s, less the number of intermediate estimates used in the calculation.

20 [t-table excerpt, organized by df: columns show the area between two scores, the area beyond two scores (out in the tails), and the area in each tail.]

21 [Same t-table excerpt, organized by df.] Notice: with a large sample size, the t values are the same as the familiar z-score values. Remember these useful values for z-scores? 1.64, 1.96, 2.58.

22 A quick revisit of the law of large numbers: the relationship between increased sample size, decreased variability, and smaller "critical values." As n goes up, variability goes down.

23 Law of large numbers: As the number of measurements increases, the data become more stable and a better approximation of the true signal (e.g., the mean). As the number of observations (n) increases, or the number of times the experiment is performed, the signal becomes clearer (the static cancels out). http://www.youtube.com/watch?v=ne6tB2KiZuk With only a few people, any little error is noticed (it becomes exaggerated when we look at the whole group); with many people, any little error is corrected (it becomes minimized when we look at the whole group).
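
A small NumPy simulation (illustrative only; the distribution and seed are not from the slides) of the law of large numbers: the running sample mean is noisy for small n and settles toward the true mean as n grows:

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, true_sd = 5.0, 1.0
samples = rng.normal(true_mean, true_sd, size=10_000)

# Running mean after n observations: noisy at first, stable for large n
for n in (5, 50, 500, 5_000, 10_000):
    print(n, round(samples[:n].mean(), 3))
```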

24 Crowd sourcing for predicting future events: the Wisdom of Crowds, Francis Galton (1906). Revisit: the law of large numbers. Deviation scores / error term: how far away the individual scores (guesses) are from the true score. Mean: the over-estimates and under-estimates balance each other out. http://www.npr.org/blogs/parallels/2014/04/02/297839429/-so-you-think-youre-smarter-than-a-cia-agent

25 Comparing z-score distributions with t-score distributions. Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. Critical t (just like critical z) separates common from rare scores: it is used to define both the common scores (the "confidence interval") and the rare scores (the "region of rejection").

26 Comparing z-score distributions with t-score distributions. Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

27 Comparing z-score distributions with t-score distributions. Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). Please notice: as sample sizes get smaller, the tails get thicker; as sample sizes get bigger, the tails get thinner and look more like the z-distribution.

28 Comparing z-score distributions with t-score distributions. Differences include: 1) We use the t-distribution when we don't know the standard deviation of the population and have to estimate it from our sample. 2) The shape of the sampling distribution is very sensitive to small sample sizes (it actually changes shape depending on n). 3) Because the shape changes, the relationship between the scores and the proportions under the curve changes. (So we would need a different table for every possible n, but just the important ones are summarized in our t-table.) Please note: once sample sizes get big enough, the t distribution (curve) starts to look exactly like the z distribution (curve).
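
A sketch (assuming SciPy) of the thicker tails described above: the area beyond a score of 2 is larger for small df and shrinks toward the normal-curve area as df grows:

```python
from scipy import stats

print(round(stats.norm.sf(2.0), 4))             # area beyond z = 2 (~0.0228)
for df in (3, 10, 30, 120):
    print(df, round(stats.t.sf(2.0, df=df), 4))
# small df -> thicker tail (more area beyond 2); large df -> close to the z value
```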

29 95% confidence interval: mean ± z σ = 30 ± (1.96)(2), so 26.08 < µ < 33.92. 99% confidence interval: mean ± z σ = 30 ± (2.58)(2), so 24.84 < µ < 35.16.
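
A quick Python check of these two intervals (mean = 30, standard error = 2; the variable names are just for illustration):

```python
mean, se = 30, 2
ci_95 = (mean - 1.96 * se, mean + 1.96 * se)   # (26.08, 33.92)
ci_99 = (mean - 2.58 * se, mean + 2.58 * se)   # (24.84, 35.16)
print(ci_95, ci_99)
```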

30 Melvin versus Mark: the difference is not due to sample size, because both samples are the same size; it is not due to population variability, because they sampled the same population. Yes, the difference is due to sloppiness and random error in Melvin's sample.

31 Ho: µ = 5 (bags of potatoes from that plant are not different from other plants). Ha: µ ≠ 5 (bags of potatoes from that plant are different from other plants). Two-tailed test (α = .05), critical z = ±1.96. z = (6 - 5) / (1/√16) = 1 / 0.25 = 4.0. We use a z-score because we know the population standard deviation.
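
A minimal sketch of this calculation in Python (σ = 1 and n = 16 are read off the slide's arithmetic):

```python
from math import sqrt

sample_mean, pop_mean = 6, 5
pop_sd, n = 1, 16                     # population sd is known -> z-score
se = pop_sd / sqrt(n)                 # 1 / 4 = 0.25
z = (sample_mean - pop_mean) / se     # 1 / 0.25 = 4.0
print(z, z > 1.96)                    # 4.0, True -> reject H0 (two-tailed, alpha = .05)
```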

32 Yes, we reject the null because the observed z (4.0) is bigger than the critical z (1.96): there is a difference. These three will always match. The probability of a Type I error is always equal to alpha (.05). If the critical z were 1.64, the decision would not change, because the observed z (4.0) is still bigger than the critical z (1.64). If the critical z were 2.58, the decision would still not change, because the observed z (4.0) is still bigger than the critical z (2.58): there is a difference.

33 Two-tailed test (α = .05). df = n - 1 = 16 - 1 = 15, so the critical t(15) = ±2.131. t = (89 - 85) / (6/√16) = 4 / 1.5 = 2.667. We use a t-score because we don't know the population standard deviation.
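
The same calculation for this one-sample t-test, with the critical value looked up from SciPy rather than a printed t-table (a sketch, assuming SciPy):

```python
from math import sqrt
from scipy import stats

sample_mean, pop_mean = 89, 85
sample_sd, n = 6, 16                        # sample sd estimated -> t-score
se = sample_sd / sqrt(n)                    # 6 / 4 = 1.5
t = (sample_mean - pop_mean) / se           # 4 / 1.5 = 2.667
critical_t = stats.t.ppf(0.975, df=n - 1)   # 2.131 for df = 15, two-tailed alpha = .05
print(round(t, 3), round(critical_t, 3), abs(t) > critical_t)   # 2.667, 2.131, True
```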

34 Yes, we reject the null because the observed t (2.67) is bigger than the critical t (2.131): the consultant did improve morale. These three will always match. The probability of a Type I error is always equal to alpha (.05). If the critical t were 1.753 (one-tailed, α = .05), the decision would not change, because the observed t (2.67) is still bigger than the critical t (1.753). If the critical t were 2.947 (two-tailed, α = .01), the decision would change, because the observed t (2.67) is not bigger than the critical t (2.947): at that level we would conclude the consultant did not improve morale.

35 The average weight of bags of potatoes from this particular plant is 6 pounds, while the average weight for the population is 5 pounds. A z-test was completed and this difference was found to be statistically significant. We should fix the plant. (z = 4.0; p < 0.05). Start the summary with the two means (based on the DV) for the two levels of the IV; describe the type of test (z-test versus t-test) with a brief overview of the results; finish with the statistical summary: z = 4.0; p < 0.05. Or, if it were not significant: z = 1.2; n.s. (The value of the observed statistic; n.s. = "not significant"; p < 0.05 = "significant".)

36 The average job-satisfaction score was 89 for the employees who went on the retreat, while the average score for the population is 85. A t-test was completed and this difference was found to be statistically significant. We should hire the consultant. (t(15) = 2.67; p < 0.05). Start the summary with the two means (based on the DV) for the two levels of the IV; describe the type of test (z-test versus t-test) with a brief overview of the results; finish with the statistical summary: t(15) = 2.67; p < 0.05. Or, if it were not significant: t(15) = 1.07; n.s. (The df, then the value of the observed statistic; n.s. = "not significant"; p < 0.05 = "significant".)

37 A note on z-scores and t-scores: the numerator is always the distance between the means (how far apart the distributions are); the denominator is always a measure of variability (how wide the curves are, or how much they overlap).

38 Five steps to hypothesis testing. Step 1: Identify the research problem (hypothesis); describe the null and alternative hypotheses. Step 2: Decision rule. Alpha level? (α = .05 or .01)? Critical statistic (e.g., z or t) value? Step 3: Calculations. Step 4: Make a decision whether or not to reject the null hypothesis: if the observed z (or t) is bigger than the critical z (or t), then reject the null. Step 5: Conclusion - tie the findings back to the research problem. How is a single-sample t-test different from a two-sample t-test? A single-sample standard deviation versus the average (pooled) standard deviation for two samples; a single sample has one "n" while two samples will have an "n" for each sample. How is a single-sample t-test most similar to the two-sample t-test?

39 Independent samples t-test. Donald is a consultant and leads training sessions. As part of his training sessions, he provides the students with breakfast. He has noticed that when he provides a full breakfast, people seem to learn better than when he provides just a small meal (donuts and muffins). So, he put his hunch to the test. He had two classes, both with three people enrolled. One group was given a big meal and the other group was given only a small meal. He then compared their test performance at the end of the day. Please test with an alpha = .05. Big meal: 22, 25, ... (Mean = 24); small meal: 19, 23, 21 (Mean = 21). t = (x̄1 - x̄2) / variability = (24 - 21) / variability. We've got to figure out the variability part: we want to average the variability from the 2 samples; call it "pooled." Are the two means significantly different from each other, or is the difference just due to chance?

40 Hypothesis testing. Step 1: Identify the research problem: did the size of the meal affect the learning / test scores? Step 2: Describe the null and alternative hypotheses. Step 3: Decision rule: α = .05, two-tailed test. Total degrees of freedom (df total) = (n1 - 1) + (n2 - 1) = (3 - 1) + (3 - 1) = 4, with n1 = 3 and n2 = 3. Critical t(4) = 2.776. Step 4: Calculate the observed t-score. Notice: there are two different ways to think about it.

41 α = .05, df = 4, critical t(4) = 2.776, two-tailed test.

42 Big meal (22, 25, ...; Mean = 24): deviations from the mean -2, 1, 1; squared deviations 4, 1, 1; Σ = 6; notice s² = 6/2 = 3.0. Small meal (19, 23, 21; Mean = 21): deviations from the mean -2, 2, 0; squared deviations 4, 4, 0; Σ = 8; notice s² = 8/2 = 4.0. S²pooled = [(n1 - 1)s1² + (n2 - 1)s2²] / (n1 + n2 - 2) = [(3 - 1)(3) + (3 - 1)(4)] / (3 + 3 - 2) = 14/4 = 3.5. Notice: the simple average of 3.0 and 4.0 is also 3.5 (because the sample sizes are equal).
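
A small Python check of the pooled-variance step. Note: the third big-meal score (25) is not printed in the transcript; it is inferred here from the stated mean of 24 and the deviation scores on the slide.

```python
# Pooled variance: weighted average of the two sample variances
big_meal = [22, 25, 25]      # mean 24, s^2 = 3.0 (third score inferred, see note above)
small_meal = [19, 23, 21]    # mean 21, s^2 = 4.0

def sample_var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

n1, n2 = len(big_meal), len(small_meal)
s2_pooled = ((n1 - 1) * sample_var(big_meal) + (n2 - 1) * sample_var(small_meal)) / (n1 + n2 - 2)
print(s2_pooled)   # 3.5 (equal n, so it also equals the simple average of 3.0 and 4.0)
```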

43 Using the same data as the previous slide, with S²pooled = 3.5: observed t = (24 - 21) / √(3.5/3 + 3.5/3) = 3 / 1.5275 = 1.964. Critical t(4) = 2.776. The observed t (1.964) is not larger than 2.776, so we do not reject the null hypothesis: t(4) = 1.964; n.s. Conclusion: there appears to be no difference between the groups.
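
And the full independent-samples t-test, checked against scipy.stats.ttest_ind, which uses the pooled variance by default (same caveat as above: the third big-meal score is inferred):

```python
from math import sqrt
from scipy import stats

big_meal = [22, 25, 25]     # third score inferred from the stated mean of 24
small_meal = [19, 23, 21]

s2_pooled = 3.5
se = sqrt(s2_pooled / 3 + s2_pooled / 3)         # 1.5275
t_by_hand = (24 - 21) / se                       # 1.964
critical_t = stats.t.ppf(0.975, df=4)            # 2.776

t_scipy, p = stats.ttest_ind(big_meal, small_meal)   # pooled-variance t-test by default
print(round(t_by_hand, 3), round(t_scipy, 3), round(critical_t, 3), p > 0.05)
# 1.964, 1.964, 2.776, True -> do not reject the null: t(4) = 1.964; n.s.
```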

44 How to report the findings for a t-test: write a one-paragraph summary of the study. Describe the IV and DV, present the two means, say which type of test was conducted, and give the statistical results (here: observed t = 1.964, df = 4, mean of the big meal = 24, mean of the small meal = 21). Example: We compared test scores for large and small meals. The mean test score for the big meal was 24, and was 21 for the small meal. A t-test was calculated and there appears to be no significant difference in test scores between the two types of meals, t(4) = 1.964; n.s. Start the summary with the two means (based on the DV) for the two levels of the IV; describe the type of test (t-test versus ANOVA) with a brief overview of the results; finish with the statistical summary: t(4) = 1.96; n.s. Or, if it were significant: t(9) = 3.93; p < 0.05. (Type of test with degrees of freedom, then the value of the observed statistic; n.s. = "not significant"; p < 0.05 = "significant".)

45 We compared test scores for large and small meals. The mean test score for the big meal was 24, and was 21 for the small meal. A t-test was calculated and there appears to be no significant difference in test scores between the two types of meals, t(4) = 1.964; n.s. Start the summary with the two means (based on the DV) for the two levels of the IV; describe the type of test (t-test versus ANOVA) with a brief overview of the results; finish with the statistical summary: t(4) = 1.96; n.s. Or, if it were significant: t(9) = 3.93; p < 0.05. (Type of test with degrees of freedom, then the value of the observed statistic; n.s. = "not significant"; p < 0.05 = "significant".)

46

