Chapter 7: Hypothesis Testing
Learning Objectives
– Describe the process of hypothesis testing
– Correctly state hypotheses
– Distinguish between one-tailed and two-tailed tests of significance
– Describe and distinguish between Type I and Type II error
– Describe statistical power and understand its importance in statistical analysis
– Describe effect size and explain its relation to statistical significance
– Understand the place of hypothesis testing in evidence-based practice
Introduction Nurses are often interested in studying whether a newly developed intervention for fall prevention is more effective than an existing approach in reducing fall rates. An effect exists when changes in one variable cause another variable to change. Hypothesis testing is the process of making scientific decisions by determining how likely it is that an observed difference in a variable (fall rates, in our example) is due to chance alone.
General Steps in Hypothesis Testing
1. Determine the hypotheses
2. Choose the level of significance
3. Propose an appropriate test
4. Check the assumptions of the chosen test
5. Compute the test statistic
6. Find the critical value
7. Compare the test statistic and critical value to draw conclusions
Step #1: Determining hypotheses In each study, two competing hypotheses are formulated: the null hypothesis and the alternative hypothesis. The null hypothesis states that there is no effect and is denoted H0. The alternative hypothesis states that there is an effect and is denoted H1.
Step #2: Choosing the level of significance The level of significance is the criterion used for determining statistical significance and is set by the researcher in advance. Common choices include 10% (.10), 5% (.05), 1% (.01), and 0.1% (.001). For example, if the researcher is willing to incorrectly reject the null hypothesis 5% of the time, then a level of significance of .05 is appropriate.
Step #3: Propose an appropriate test The number of variables and the level of measurement of each variable are critical in determining the most appropriate test for a given research question. For example, consider the null hypothesis: there is no relationship between weight and systolic blood pressure (SBP).
Step #3: Propose an appropriate test We are examining a relationship between two variables, weight and SBP, and both are measured at the ratio level of measurement. This is a perfect hypothesis for the test statistic Pearson's correlation coefficient.
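The book itself carries out its analyses in SPSS; purely as an illustrative sketch, the Python code below tests the weight-SBP hypothesis with Pearson's r using scipy and entirely made-up data values.

```python
# Minimal sketch (hypothetical data): testing H0 "no relationship between
# weight and systolic blood pressure (SBP)" with Pearson's r.
from scipy import stats

weight = [68, 75, 82, 90, 77, 85, 95, 70, 88, 102]            # kg, made-up values
sbp    = [118, 122, 130, 141, 125, 133, 147, 120, 138, 150]   # mmHg, made-up values

r, p_value = stats.pearsonr(weight, sbp)
print(f"Pearson's r = {r:.2f}, p = {p_value:.4f}")

# If p < .05 we would reject H0 and conclude weight and SBP are related.
```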
Step #4: Checking assumptions of the chosen test Each statistical test requires a certain set of assumptions to be met. Each test may have unique assumptions, but common assumptions across tests include:
– Normality
– Equal variance across groups
– Independence
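These checks are run in SPSS in the book; as a hedged illustration only, the sketch below uses Python's scipy on made-up group data to run a Shapiro-Wilk normality test and Levene's test for equality of variances.

```python
# Sketch: common assumption checks on two hypothetical groups of scores.
from scipy import stats

group_a = [12, 14, 11, 15, 13, 12, 16, 14]   # made-up data
group_b = [10, 9, 12, 11, 10, 13, 9, 11]     # made-up data

# Normality: Shapiro-Wilk test for each group (H0: the data are normal).
print(stats.shapiro(group_a))
print(stats.shapiro(group_b))

# Equal variance: Levene's test across groups (H0: the variances are equal).
print(stats.levene(group_a, group_b))

# Independence is usually a matter of study design (e.g., random sampling),
# not something a single statistical test can establish.
```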
Steps #5 and #6: Running the test and finding the critical value As we proceed through this book, we will discuss each test and how to select the necessary options in SPSS.
Step #7: Comparing results and making conclusions We compare the probability of the observed effect (the p value) with α, where α is the level of significance and the p value is the probability of obtaining a result at least as extreme as the one observed if the null hypothesis is true. The statistical decision is based on this comparison:
– Reject the null hypothesis in favor of the alternative hypothesis if the p value is smaller than α
– Do not reject the null hypothesis if the p value is larger than α
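In code, the decision rule reduces to a single comparison; the tiny sketch below, with hypothetical p values, simply restates it.

```python
# Sketch of the decision rule: compare the p value to the chosen alpha.
def decide(p_value: float, alpha: float = 0.05) -> str:
    if p_value < alpha:
        return "Reject the null hypothesis"
    return "Do not reject the null hypothesis"

print(decide(0.03))   # Reject the null hypothesis
print(decide(0.20))   # Do not reject the null hypothesis
```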
Step #7: Comparing results and making conclusions Remember that the decision is always stated in terms of the null hypothesis, i.e., reject or do not reject the null hypothesis. Never state the decision in terms of the alternative hypothesis.
One-tailed vs. Two-tailed Tests of Significance When the hypotheses specify no direction of difference, the test is said to be two-tailed.
– H0: There is no difference in fall incidence between the newly developed approach and an existing approach.
– H1: There is a difference in fall incidence between the newly developed approach and an existing approach.
One-tailed vs. Two-tailed Tests of Significance If the hypotheses imply a direction in addition to a possible difference, then the test is said to be one-tailed.
– H0: There is no difference in fall incidence between the newly developed approach and an existing approach.
– H1: The newly developed approach will be more effective than the existing approach in preventing fall incidence.
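For a z statistic, the computational difference is whether the probability comes from one tail of the normal curve or from both. The sketch below uses an arbitrary z value of 1.80 purely for illustration.

```python
# Sketch: one-tailed vs. two-tailed p values for the same z statistic.
from scipy import stats

z = 1.80  # arbitrary example value

p_two_tailed = 2 * (1 - stats.norm.cdf(abs(z)))   # a difference in either direction
p_one_tailed = 1 - stats.norm.cdf(z)              # a difference in the predicted direction

print(f"two-tailed p = {p_two_tailed:.3f}")   # about .072
print(f"one-tailed p = {p_one_tailed:.3f}")   # about .036
```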
Types of Errors There are two types of errors in hypothesis testing. A Type I error occurs when the null hypothesis is rejected when it should not be. A Type II error occurs when the null hypothesis is not rejected when it should be.

Decision                        Null hypothesis true    Null hypothesis false
Do not reject null hypothesis   Correct decision        Type II error (β)
Reject null hypothesis          Type I error (α)        Correct decision
Types of Errors A researcher has control over Type I error and can set it before a study is conducted. However, a researcher cannot directly set Type II error. Type I and Type II errors are inversely related. Increasing the sample size is one way to control both errors simultaneously.
Hypothesis Testing: An Example Let us assume we suspect that there is a difference in the effectiveness of fall prevention between a newly developed approach and an existing approach. It is known that the average number of falls is 13 with the existing fall prevention program and that the standard deviation is 4. A sample of 40 participants was taken.
Hypothesis Testing: An Example Step 1: Determine the hypotheses
– H0: There is no difference in fall incidence between the newly developed approach and the existing approach.
– H1: There is a difference in fall incidence between the newly developed approach and the existing approach.
Step 2: Choose the level of significance
– This example will use an α of .05
Hypothesis Testing: An Example Step 3: Propose an appropriate test
– A one-sample z-test, as a known population mean will be compared with the mean calculated from a sample of 40, and the standard deviation is known.
Step 4: Check the assumptions of the chosen test
– The sample size should be large enough and the standard deviation must be known.
– We meet both of these assumptions.
Hypothesis Testing: An Example Step 5: Run the proposed test
Step 6: Find the critical values
– From the z table, our critical values are -1.96 and +1.96 with an α of .05.
Hypothesis Testing: An Example Step 7: Compare results and make conclusions
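The slides do not report the sample mean observed under the new program, so the sketch below assumes a hypothetical value of 11.5 falls purely to show how Steps 5 through 7 fit together; the population mean of 13, standard deviation of 4, sample size of 40, and α of .05 come from the example.

```python
# Sketch of the one-sample z-test in the fall-prevention example.
# Known population values come from the slides; the sample mean is hypothetical.
import math
from scipy import stats

mu_0  = 13     # average falls under the existing program (from the slides)
sigma = 4      # known standard deviation (from the slides)
n     = 40     # sample size (from the slides)
x_bar = 11.5   # hypothetical sample mean under the new program
alpha = 0.05

# Step 5: compute the test statistic.
z = (x_bar - mu_0) / (sigma / math.sqrt(n))

# Step 6: critical values for a two-tailed test at alpha = .05.
z_crit = stats.norm.ppf(1 - alpha / 2)   # about 1.96

# Step 7: compare and conclude.
p_value = 2 * (1 - stats.norm.cdf(abs(z)))
print(f"z = {z:.2f}, critical values = ±{z_crit:.2f}, p = {p_value:.4f}")
if abs(z) > z_crit:
    print("Reject the null hypothesis: the fall rates differ.")
else:
    print("Do not reject the null hypothesis.")
```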
Statistical Power Statistical power is the probability of rejecting the null hypothesis when it is false, that is, of correctly saying there is an effect when one exists.
– i.e., statistical power is 1 - β
Factors influencing power:
– Level of significance
– Effect size
– Sample size
– Type of statistical test
Statistical Power Statistical power ranges between 0 and 1. It tells us how often we can correctly reject a null hypothesis and say there is a true effect. A minimum power of 80% is generally suggested for a study to be conducted at an acceptable level.
Types of Power Analysis A priori power analysis
– This is completed before the study is conducted and guides the estimate of the sample size required to reject the null hypothesis when we should.
Post hoc power analysis
– This is conducted after the study is completed and tells you the level of power at which the study was conducted, given the obtained sample size and other factors.
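As an illustration of both kinds of analysis, the sketch below uses the statsmodels library (an assumption; the book itself works in SPSS), with a hypothetical medium effect size and sample size.

```python
# Sketch: a priori and post hoc power analysis for an independent-samples t-test.
# The effect size and sample size used here are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# A priori: how many participants per group are needed to detect a medium
# effect (d = 0.5) with 80% power at alpha = .05?
n_needed = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"required n per group: {n_needed:.0f}")   # roughly 64

# Post hoc: with 40 participants per group, what power did the study have?
achieved = analysis.solve_power(effect_size=0.5, alpha=0.05, nobs1=40)
print(f"achieved power: {achieved:.2f}")   # roughly 0.60
```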
Effect Size Even if a study reports a statistically significant result, one must be cautious in interpreting it. One study may obtain a statistically significant result while another may not. Statistical significance does not tell us how large the effect is or how important its size is in practice.
Effect Size Effect size is a measure of the strength or magnitude of an effect, that is, of a difference or relationship between variables.
Types of effect size:
– Cohen's d
– Pearson's r coefficient
– ω² (omega squared)
– Others
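As one concrete example, Cohen's d for two independent groups is the mean difference divided by the pooled standard deviation; the sketch below computes it on made-up data.

```python
# Sketch: Cohen's d for two hypothetical groups (mean difference / pooled SD).
import statistics

group_a = [12, 14, 11, 15, 13, 12, 16, 14]   # made-up data
group_b = [10, 9, 12, 11, 10, 13, 9, 11]     # made-up data

n_a, n_b = len(group_a), len(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)

# Pooled standard deviation across the two groups.
pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
d = (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd
print(f"Cohen's d = {d:.2f}")
```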