
1 SUMMARY Hypothesis testing

2 Self-engagement assessment

3 Null hypothesis (no song vs. song) Null hypothesis: I assume that the populations without and with the song are the same. At the beginning of our calculations, we assume the null hypothesis is true.

4 Hypothesis testing song (mean engagement 8.2 with the song vs. 7.8 without) Because of such a low probability, we interpret 8.2 as a significant increase over 7.8, caused by the undeniable pedagogical qualities of the 'Hypothesis testing song'.

5 Four steps of hypothesis testing
1. Formulate the null and the alternative hypothesis (this includes choosing a one- or two-directional test).
2. Select the significance level α – the criterion upon which we decide whether to reject the claim being tested.
--- COLLECT DATA ---
3. Compute the p-value. The p-value is the probability that the data would be at least as extreme as those observed, if the null hypothesis were true.
4. Compare the p-value to the α-level. If p ≤ α, the observed effect is statistically significant, the null hypothesis is rejected, and the alternative hypothesis is supported.
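A minimal sketch of these four steps in R, using a one-sample Z-test; the numbers (mu0, sigma, x_bar, n) are illustrative placeholders, not the exact values from the lecture:

# Step 1: H0: mean = mu0, H1: mean > mu0 (one-tailed)
mu0   <- 7.8    # hypothesized population mean (placeholder)
sigma <- 0.5    # population s.d., assumed known for a Z-test (placeholder)
x_bar <- 8.2    # observed sample mean (placeholder)
n     <- 20     # sample size (placeholder)

# Step 2: select the significance level
alpha <- 0.05

# Step 3: compute the test statistic and the p-value
z <- (x_bar - mu0) / (sigma / sqrt(n))
p_value <- 1 - pnorm(z)     # P(Z >= z) if H0 were true

# Step 4: compare the p-value to alpha
if (p_value <= alpha) cat("reject H0\n") else cat("retain H0\n")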

6 One-tailed and two-tailed tests
one-tailed (directional) test vs. two-tailed (non-directional) test
Z-critical value – what is it?
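In R, the Z-critical values come from the quantile function of the standard normal distribution; a quick sketch, assuming α = 0.05:

alpha <- 0.05
qnorm(1 - alpha)       # one-tailed (directional) critical value, about 1.645
qnorm(1 - alpha / 2)   # two-tailed (non-directional) critical value, about 1.96
# Two-tailed: reject H0 if |z| exceeds the critical value.
# One-tailed: reject H0 if z exceeds it in the predicted direction.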

7 NEW STUFF

8 Decision errors Hypothesis testing is prone to misinterpretations. It's possible that the students selected for the musical lesson were already more engaged, and we wrongly attributed the high engagement score to the song. Of course, it's unlikely to simply select a sample with a mean engagement of 8.2. The probability of doing so is 0.0022, which is pretty low. Thus we concluded it is unlikely. But it's still possible to have randomly obtained a sample with such a mean.

9 Four possible things can happen
State of the world (rows) vs. decision (columns):
            Reject H0   Retain H0
H0 true         1           3
H0 false        2           4
In which cases did we make a wrong decision?

10 Four possible things can happen
In which cases did we make a wrong decision? In cells 1 (H0 true, Reject H0) and 4 (H0 false, Retain H0).

11 Four possible things can happen
            Reject H0      Retain H0
H0 true     Type I error
H0 false                   Type II error

12 Type I error When there really is no difference between the populations, random sampling can lead to a difference large enough to be statistically significant. You reject the null, but you shouldn't. False positive – the person doesn't have the disease, but the test says they do.

13 Type II error When there really is a difference between the populations, random sampling can lead to a difference small enough not to be statistically significant. You do not reject the null, but you should. False negative – the person has the disease, but the test doesn't pick it up. Type I and II errors are theoretical concepts. When you analyze your data, you don't know whether the populations are identical; you only know the data in your particular samples. You will never know whether you made one of these errors.

14 The trade-off If you set the α level to a very low value, you will make few Type I errors. But by reducing the α level you also increase the chance of a Type II error.
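A small simulation can make this trade-off concrete; the effect size (0.5 s.d.), the sample size and the number of simulated experiments below are arbitrary choices for illustration:

set.seed(1)
n     <- 30
n_sim <- 10000

# p-values when H0 is really true (both samples from the same population)
p_null <- replicate(n_sim, t.test(rnorm(n), rnorm(n))$p.value)

# p-values when H0 is really false (true difference of 0.5 s.d.)
p_alt <- replicate(n_sim, t.test(rnorm(n), rnorm(n, mean = 0.5))$p.value)

for (a in c(0.10, 0.05, 0.01)) {
  cat(sprintf("alpha = %.2f   Type I rate = %.3f   Type II rate = %.3f\n",
              a, mean(p_null <= a), mean(p_alt > a)))
}
# Lowering alpha shrinks the Type I rate but inflates the Type II rate.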

15 Clinical trial for a novel drug
A drug that should treat a disease for which no therapy currently exists.
If the result is statistically significant, the drug will be marketed.
If the result is not statistically significant, work on the drug will cease.
Type I error: treat future patients with an ineffective drug.
Type II error: cancel the development of a functional drug for a condition that is currently not treatable.
Which error is worse? I would say the Type II error. To reduce its risk, it makes sense to set α = 0.10 or even higher.
Harvey Motulsky, Intuitive Biostatistics

16 Clinical trial for a me-too drug
A drug that should treat a disease for which another therapy already exists.
Again, if the result is statistically significant, the drug will be marketed.
Again, if the result is not statistically significant, work on the drug will cease.
Type I error: treat future patients with an ineffective drug.
Type II error: cancel the development of a functional drug for a condition that can be treated adequately with existing drugs.
Thinking scientifically (not commercially), I would minimize the risk of the Type I error (set α to a very low value).
Harvey Motulsky, Intuitive Biostatistics

17 Engagement example, n = 30 Z = 0.79 Z = 1.87 www.udacity.com – Statistics

18 Engagement example, n = 30
The slide shows the empty decision table: decision (Reject H0 / Retain H0) by state of the world (H0 true / H0 false).
Which of these four quadrants represents the result of our hypothesis test? www.udacity.com – Statistics

19 Engagement example, n = 30
The answer is marked with an X on the slide, in the H0-true row of the decision table.
Which of these four quadrants represents the result of our hypothesis test?

20 Engagement example, n = 50 Z = 1.02 Z = 2.42 www.udacity.com – Statistics

21 Engagement example, n = 50
The slide shows the empty decision table: decision (Reject H0 / Retain H0) by state of the world (H0 true / H0 false).
Which of these four quadrants represents the result of our hypothesis test? www.udacity.com – Statistics

22 Engagement example, n = 50
The answer is marked with an X on the slide, in the H0-true row of the decision table.
Which of these four quadrants represents the result of our hypothesis test? www.udacity.com – Statistics

23 Two populations are compared: the population of students that did not attend the musical lesson (its parameters are known) and the population of students that did attend the musical lesson (only a sample statistic is known).

24 Test statistic: the Z-test.

25 New situation
The average engagement score in the population of 100 students is 7.5. A sample of 50 students was exposed to the musical lesson. Their engagement score became 7.72, with an s.d. of 0.6.
DECISION: Does the musical performance lead to a change in the students' engagement? Answer YES/NO. Set up a hypothesis test, please.

26 Hypothesis test

27 Formulate the test statistic – but this is unknown! We know the mean of the population of students that did not attend the musical lesson, but the population standard deviation is unknown; from the students that did attend the musical lesson we only have a sample.

28 t-statistic – the one-sample t-test (in Czech: jednovýběrový t-test)
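A rough sketch of the computation in R, using the summary figures from the 'New situation' slide (sample mean 7.72, hypothesized mean 7.5, s.d. 0.6, n = 50):

x_bar <- 7.72
mu0   <- 7.5
s     <- 0.6
n     <- 50

t_stat <- (x_bar - mu0) / (s / sqrt(n))   # roughly 2.59
df     <- n - 1
2 * (1 - pt(t_stat, df))                  # two-tailed p-value, roughly 0.01

# With the raw scores available, t.test(scores, mu = 7.5) gives the same result.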

29 t-distribution

30 One-sample t-test

31 Quiz

32 Z-test vs. t-test

33 Typical example of one-sample t-test

34 Dependent t-test for paired samples
Two samples are dependent when the same subject takes the test twice: the paired t-test (in Czech: párový t-test). This is a two-sample test, as we work with two samples.
Examples of such situations:
Each subject is assigned to two different conditions (e.g., use a QWERTZ keyboard and an AZERTY keyboard and compare the error rates).
Pre-test … post-test.
Growth over time.
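A minimal paired t-test sketch in R; the pre-test/post-test scores below are made-up data for illustration:

pre  <- c(6.8, 7.1, 7.5, 6.9, 7.3, 7.0, 7.6, 7.2)   # made-up scores, first measurement
post <- c(7.4, 7.3, 7.9, 7.1, 7.8, 7.2, 8.0, 7.5)   # made-up scores, same students again

t.test(post, pre, paired = TRUE)   # paired (dependent) t-test
t.test(post - pre, mu = 0)         # equivalent: one-sample t-test on the differences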

35 Example: a table with one row per student (student 1, student 2, …, student n) and the scores in the 'no song' and 'song' conditions.

36 Do the hypothesis test

37

38 Dependent samples
E.g., give one person two different conditions to see how he/she reacts – perhaps one control and one treatment, or two types of treatments.
Advantages: we can use fewer subjects; cost-effective; less time-consuming.
Disadvantages: carry-over effects; order may influence results.

39 Independent samples

40 This is true only if the two samples are independent!

41 Independent samples

42 An example

43

44

45 Summary of t-tests: two-sample tests
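A minimal independent-samples t-test sketch in R, on made-up engagement scores:

no_song <- c(7.2, 7.8, 7.5, 7.9, 7.4, 7.6, 7.3, 7.7)   # made-up data
song    <- c(7.9, 8.3, 8.0, 8.4, 7.8, 8.1, 8.2, 8.5)   # made-up data

t.test(song, no_song)                     # Welch's t-test (R's default, unequal variances allowed)
t.test(song, no_song, var.equal = TRUE)   # classical Student's t-test (equal variances assumed)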

46 F-test of equality of variances (source: Wikipedia)
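In R the F-test of equality of variances is var.test(); a small sketch on made-up data:

x <- c(7.2, 7.8, 7.5, 7.9, 7.4, 7.6, 7.3, 7.7)   # made-up data
y <- c(7.9, 8.3, 8.0, 8.4, 7.8, 8.1, 8.2, 8.5)   # made-up data

var.test(x, y)   # H0: the two population variances are equal
# A small p-value suggests unequal variances; then prefer Welch's t-test
# (the t.test default) over var.equal = TRUE.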

47 t-test in R: t.test()
Let's have a look at the R manual: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/t.test.html
See my website for a link to a pdf explaining the various t-tests in R (with examples).
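A compact overview of the t.test() arguments used in this lecture (see ?t.test for the full list), plus one runnable call on made-up data:

# t.test(x, mu = 7.5)                    one-sample t-test
# t.test(x, y, paired = TRUE)            paired (dependent) t-test
# t.test(x, y)                           Welch two-sample t-test (unequal variances)
# t.test(x, y, var.equal = TRUE)         Student two-sample t-test (equal variances)
# t.test(x, y, alternative = "greater")  one-tailed test ("less", "greater", "two.sided")

set.seed(1)
x <- rnorm(30, mean = 7.7, sd = 0.6)     # made-up sample
t.test(x, mu = 7.5)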

48 Assumptions
1. Unpaired t-tests are highly sensitive to violations of the independence assumption.
2. The populations the samples come from should be approximately normal. This is less important for large sample sizes.
What to do if these assumptions are not fulfilled:
1. Use a paired t-test.
2. Let's see further.

49 Check for normality – histogram
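For example, a histogram of the built-in rivers dataset (used again on the following slides) already reveals a strong right skew:

hist(rivers, breaks = 30, main = "Histogram of rivers", xlab = "Length (miles)")
# A pronounced skew like this is a warning sign for the normality assumption.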

50 Check for normality – QQ-plot
qqnorm(rivers)
qqline(rivers)

51 Check for normality – tests
The graphical methods for checking data normality still leave much to your own interpretation. If you show any of these plots to ten different statisticians, you can get ten different answers.
H0: Data follow a normal distribution.
Shapiro-Wilk test:
shapiro.test(rivers)
Shapiro-Wilk normality test
data: rivers
W = 0.6666, p-value < 2.2e-16
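For comparison, a short sketch contrasting the Shapiro-Wilk test on data that really are normal with the skewed rivers data:

set.seed(1)
shapiro.test(rnorm(100))   # truly normal sample: a large p-value is expected
shapiro.test(rivers)       # strongly skewed data: tiny p-value, normality rejected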

52 Nonparametric statistics
Small samples from considerably non-normal distributions call for non-parametric tests.
No assumption about the shape of the distribution.
No assumption about the parameters of the distribution (hence the name non-parametric).
They are simple to do; however, their theory is extremely complicated. Of course, we won't cover it at all.
However, they are generally less powerful than their parametric counterparts. So if your data fulfill the normality assumptions, use parametric tests (t-test, F-test).

53 Nonparametric tests
If the normality assumption of the t-test is violated and the sample sizes are too small, then a nonparametric alternative should be used. The nonparametric alternative to the t-test is the Wilcoxon test.
wilcox.test()
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/wilcox.test.html
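A minimal wilcox.test() sketch on made-up data; the unpaired call is the rank-sum (Mann-Whitney) test, the paired call is the signed-rank test:

x <- c(7.2, 7.8, 7.5, 7.9, 7.4)   # made-up data
y <- c(8.1, 8.3, 8.0, 8.4, 8.2)   # made-up data

wilcox.test(x, y)                 # nonparametric alternative to the two-sample t-test
wilcox.test(x, y, paired = TRUE)  # nonparametric alternative to the paired t-test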

