
Hypothesis Testing.

Presentation on theme: "Hypothesis Testing."— Presentation transcript:

1 Hypothesis Testing

2 Samples and Sampling
We use small samples to make inferences about large populations. Good samples are:
Representative
Selected at random
Truly independent

3 Sampling Distributions
One of the most crucial ideas in statistics:
1. Take a sample
2. Calculate its mean
3. Repeat
The result is a sampling distribution of the mean.
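
The three steps above can be simulated directly. A minimal sketch in Python (standard library only; the exponential parent population, sample size of 30, and repetition count are illustrative choices, not from the slides):

```python
import random
import statistics

random.seed(0)

# A deliberately skewed parent population (exponential: mean ~1, SD ~1)
population = [random.expovariate(1.0) for _ in range(100_000)]

# 1. Take a sample, 2. calculate its mean, 3. repeat
sample_means = []
for _ in range(5_000):
    sample = random.choices(population, k=30)
    sample_means.append(statistics.mean(sample))

# The sampling distribution of the mean centers on the population mean,
# and its spread is close to SD / sqrt(n)
print(statistics.mean(sample_means))
print(statistics.stdev(sample_means))
```

Even though the parent population is skewed, a histogram of `sample_means` would look roughly normal, which is exactly the point of the next slide.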

4 Central Limit Theorem
The sampling distribution of means from random samples of n observations approaches the normal distribution as n increases, regardless of the shape of the parent population. See the CLT example.

5 Central Limit Theorem: Example
[Example figure not preserved in this transcript.]

6 So what?
From the CLT, we can quantify the accuracy of the mean estimated from a single sample.
Standard error of the mean: SE = SD/√n
Confidence intervals (CI):
68% CI = Mean ± SE
95% CI = Mean ± 1.96(SE)
99% CI = Mean ± 2.576(SE)
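
These formulas take only a few lines to apply. A sketch with made-up data (the values are hypothetical, chosen only to illustrate the calculation):

```python
import statistics
from math import sqrt

data = [4.9, 5.1, 5.3, 4.7, 5.0, 5.2, 4.8, 5.4, 5.1, 4.9]  # hypothetical sample
n = len(data)
mean = statistics.mean(data)

# Standard error of the mean: SE = SD / sqrt(n)
se = statistics.stdev(data) / sqrt(n)

# 95% CI = Mean +/- 1.96 * SE
ci_95 = (mean - 1.96 * se, mean + 1.96 * se)
print(mean, se)
print(ci_95)
```

Note that `statistics.stdev` uses the sample (n − 1) standard deviation, which is the appropriate choice when estimating from a sample.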

7 Now we can talk about hypotheses
Scientific vs. statistical hypotheses:
H0: the null hypothesis (the one you want to nullify)
H1: the alternative hypothesis
Example:
H0: Mean(control group) = Mean(experimental group)
H1: Mean(control group) < Mean(experimental group)

8 Type I and Type II Errors
How do you know your sample or group is really different from another sample, group, or population? When comparing two samples (or a sample to an assumed population), two errors are possible:

                          True state of the null hypothesis
  Statistical decision    H0 true               H0 false
  Reject H0               Type I error (α)      Correct
  Do not reject H0        Correct               Type II error (β)

9 Power
Two of the most important, and most neglected, concepts are power and effect size. Power (1 - β) is the probability of correctly rejecting a false null hypothesis. The previous table can thus be expressed in terms of probabilities:

                          True state of the null hypothesis
  Statistical decision    H0 true                    H0 false
  Reject H0               Type I error (α) = .05     Correct = .80 (power)
  Do not reject H0        Correct = .95              Type II error (β) = .20
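
The power entry in this table (.80) can be approximated numerically. A sketch using the normal (z) approximation for a two-sample, two-tailed test; the function name and the example numbers are mine, and exact t-based calculations give slightly lower power:

```python
from statistics import NormalDist

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample, two-tailed z test
    for a standardized effect size d (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)
    # Expected value of the z statistic under H1 (noncentrality)
    expected_z = d * (n_per_group / 2) ** 0.5
    return 1 - z.cdf(z_alpha - expected_z)

# With a medium effect (d = .5) and 64 per group, power comes out close to .80
print(power_two_sample(0.5, 64))
# Smaller samples mean lower power for the same effect
print(power_two_sample(0.5, 20))
```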

10 Effect Size
Which result is a larger effect?
A significant difference between groups (p < .05)
A significant difference between groups (p < .01)
(Trick question: a p value alone does not tell you the size of an effect.)

11 Effect Size
Two roads to a significant result:
1. A small effect but a large sample
2. A large effect
Effect size statistics provide estimates that are independent of the idiosyncrasies of any given sample. They typically translate mean differences into standard deviation units, e.g. Cohen's d = (Mean1 - Mean2)/SD. For this statistic, small = .2, medium = .5, and large = .8. (See Cohen, J. (1992). A power primer. Psychological Bulletin, 112.)
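
Cohen's d takes only a few lines to compute. A sketch using the pooled standard deviation in the denominator (a common convention for two groups; the data are made up for illustration):

```python
import statistics
from math import sqrt

def cohens_d(group1, group2):
    """Cohen's d = (Mean1 - Mean2) / SD, using the pooled SD."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

control = [10, 12, 11, 13, 9, 11, 12, 10]        # hypothetical scores
experimental = [13, 15, 14, 16, 12, 14, 15, 13]  # hypothetical scores

# A 3-point mean difference against a pooled SD of ~1.3: a large effect
print(cohens_d(experimental, control))
```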

12 How does it all relate?
There are four variables involved in statistical inference: sample size (N), significance criterion (α), effect size, and statistical power. Given the values of any three, you can determine the fourth.

13 Power analysis in proposals
You can use power analysis to determine the N needed for your study. If you estimate your expected effect size (e.g., a mean difference of ½ SD = ES of .5), set your alpha (typically .05), and choose the desired power to detect a difference (typically ≥ .80), you can calculate the sample size necessary to detect that difference. Tables are available in Cohen (1988) or online.
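
The same calculation can be sketched in code instead of looked up in tables. This uses the normal (z) approximation for a two-sample, two-tailed test; Cohen's t-based tables give slightly larger n, and the function name is mine:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample, two-tailed test
    (normal approximation; t-based tables give slightly larger n)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-tailed alpha
    z_beta = z.inv_cdf(power)           # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))  # medium effect: 63 per group
print(n_per_group(0.2))  # small effect needs far more: 393 per group
```

The comparison between the two calls illustrates the point of the previous slides: halving the effect size roughly quadruples (here, more than sextuples) the required sample size.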

14 What does it all mean?
Low power (< .80) leads to under-detection of actual effects; that is, increased Type II error: failure to reject a false H0. And small effect sizes, regardless of p level, are just small effects.

15 Recommended Reading and Online Resources
Cohen, J. (1992). A power primer. Psychological Bulletin, 112.
Rosnow, R. L., & Rosenthal, R. (2003). Effect sizes for experimenting psychologists. Canadian Journal of Experimental Psychology, 57(3).

