1 PTP 560 Research Methods Week 9 Thomas Ruediger, PT

2-13 (no transcript text available for these slides)
14 Calculate Sn (sensitivity), Sp (specificity), and LRs (likelihood ratios); a worked sketch follows below
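As a supplement to this slide, here is a minimal sketch of how Sn, Sp, LR+ and LR- are typically computed from a 2x2 table. The counts are hypothetical, not from the course materials.

```python
# Hypothetical 2x2 table: rows = test result, columns = reference standard
TP, FP = 45, 10   # test positive: with condition (true +), without condition (false +)
FN, TN = 5, 40    # test negative: with condition (false -), without condition (true -)

sensitivity = TP / (TP + FN)                   # Sn: proportion of those WITH the condition who test positive
specificity = TN / (TN + FP)                   # Sp: proportion of those WITHOUT the condition who test negative
lr_positive = sensitivity / (1 - specificity)  # LR+ = Sn / (1 - Sp)
lr_negative = (1 - sensitivity) / specificity  # LR- = (1 - Sn) / Sp

print(f"Sn = {sensitivity:.2f}, Sp = {specificity:.2f}, "
      f"LR+ = {lr_positive:.2f}, LR- = {lr_negative:.2f}")
```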

15 Confidence Intervals with Small Samples
– What is a small sample size? Less than 30 (one of those special numbers in stats)
– The sampling distribution tends to spread out, so the standard normal curve is not adequate
– Use the t-distributions: theoretical sampling distributions with a flatter peak, wider at the tails
– They approach the normal curve as sample size increases
– Use values of t instead of z
– Described by degrees of freedom (n - 1 for confidence intervals)
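Below is a minimal sketch (not from the slides) of a small-sample confidence interval built with the t-distribution rather than z; the data values are made up and numpy/scipy are assumed to be available.

```python
import numpy as np
from scipy import stats

sample = np.array([12.1, 9.8, 11.4, 10.7, 13.0, 9.5, 11.9, 10.2])  # hypothetical scores, n = 8 < 30
n = len(sample)
mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(n)    # standard error of the mean

df = n - 1                               # degrees of freedom for a confidence interval
t_crit = stats.t.ppf(0.975, df)          # t replaces z = 1.96 for a 95% CI
ci_lower, ci_upper = mean - t_crit * sem, mean + t_crit * sem
print(f"95% CI: ({ci_lower:.2f}, {ci_upper:.2f}); t_crit = {t_crit:.3f} vs z = 1.96")
```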

16 Hypothesis Testing
Are the differences:
– Representative of "real" effects?
– Just due to chance?
Null hypothesis (H0)
– Means are not different
– Stated in terms of population parameters
– μA = μB
Alternative hypothesis (H1)
– Difference too large to be due to chance alone
– μA ≠ μB
– May be directional or non-directional

17 Decision vs. Truth

                     H0 is True           H0 is False
Do Not Reject H0     Correct              Type II error (β)
Reject H0            Type I error (α)     Correct

18 Type I Error
Significance level (alpha level, α level)
– Your choice of how much risk you are willing to take of saying there is a difference when there really is no difference
– Set this before the study
– Conventionally 0.05: this is arbitrary, but almost always what we choose
– Choose the level based on the Type I error concern

19 Type I Error
Probability values (evaluated after the study)
– The probability of finding this big a difference by chance alone; e.g., p = .07 means a 7% probability of a difference this large by chance
– You are not stating the probability of the inverse: saying "p = .93 that the difference is real" is not appropriate
– Compare your p-value (calculated after the study) with your α level (set before the study)
– If the p-value is less than α, reject the null
– If the p-value is greater than α, fail to reject the null
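A tiny illustration of this decision rule, using the conventional α = 0.05 from the previous slide and a hypothetical p-value:

```python
alpha = 0.05    # set before the study
p_value = 0.07  # hypothetical value calculated after the study

if p_value < alpha:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")  # here: 0.07 > 0.05, so fail to reject
```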

20 Type II Error and Statistical Power
Beta (β)
– The probability of failing to reject a false H0 (null hypothesis)
– A β of 0.20 means a 20% chance of making a Type II error
Statistical power
– The complement of β (not compliment)
– In this example, 0.80 (1.00 - 0.20 = 0.80)
– An 80% probability of correctly rejecting a false null
Before the study: a priori power analysis is used to determine sample size
After the study: post hoc power is reported if H0 is not rejected ("If there was a difference, could we have found it?")
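A short sketch of the power arithmetic above, plus an a priori sample-size estimate for a two-sample t-test; the effect size is hypothetical and statsmodels is assumed to be installed (it is not part of the course materials).

```python
beta = 0.20
power = 1.0 - beta          # 0.80: probability of correctly rejecting a false null
print(f"Power = {power:.2f}")

# A priori: participants per group needed for an assumed effect size d = 0.5
from statsmodels.stats.power import TTestIndPower
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(f"About {n_per_group:.1f} participants per group (round up) for d = 0.5")
```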

21 Determinants of Statistical Power
Significance criterion
– As α increases, power increases (e.g., as α increases from 0.05 to 0.10, power increases)
Variance
– As variance decreases, power increases
Sample size
– As sample size increases, power increases
Effect size (the difference between the group means)
– As effect size increases, power increases
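A rough numerical illustration of these determinants for a two-sample t-test, assuming statsmodels is available; note that variance enters through the standardized effect size (d = mean difference / SD), so a smaller variance shows up as a larger d.

```python
from statsmodels.stats.power import TTestIndPower

calc = TTestIndPower()

print(calc.power(effect_size=0.5, nobs1=30, alpha=0.05))  # baseline power
print(calc.power(effect_size=0.5, nobs1=30, alpha=0.10))  # larger alpha       -> more power
print(calc.power(effect_size=0.5, nobs1=60, alpha=0.05))  # larger sample size -> more power
print(calc.power(effect_size=0.8, nobs1=30, alpha=0.05))  # larger effect size (or smaller
                                                          # variance, since d = diff / SD) -> more power
```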

22 z-scores and z-ratios
A z-score represents the distance between:
– A sample score and
– The sample mean
– Divided by the standard deviation
You will see this in osteoporosis scores (a z-score of +2 is 2 SD away from a healthy woman mean)
A z-ratio represents the distance between:
– A sample mean and
– The population mean
– Divided by the standard error of the mean
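A minimal sketch of the two quantities, using made-up numbers and numpy:

```python
import numpy as np

# z-score: distance of one score from the sample mean, in SD units (hypothetical data)
scores = np.array([0.82, 0.95, 1.01, 0.88, 0.97, 0.91])
z_score = (scores[0] - scores.mean()) / scores.std(ddof=1)

# z-ratio: distance of the sample mean from the population mean, in standard-error units
pop_mean = 1.00                                   # hypothetical known population mean
sem = scores.std(ddof=1) / np.sqrt(len(scores))   # standard error of the mean
z_ratio = (scores.mean() - pop_mean) / sem

print(f"z-score = {z_score:.2f}, z-ratio = {z_ratio:.2f}")
```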

23 Critical Region
– The portion of the curve beyond the critical z (above and/or below it)
– If the calculated z > critical z, reject H0
– One- or two-tailed test?
– Non-directional hypothesis: two-tailed
  – z of ±2.00 encompasses 95.44% (non-critical); 4.56% is the critical area
  – z of ±1.96 encompasses 95%; the critical region is 5%, 2.5% in each tail of a non-directional test
– Directional hypothesis: one-tailed
  – z of 1.645 encompasses 95%; the critical region is 5%, all in one tail of a directional test (versus 2.5% per tail in a non-directional test)
  – Practically, you are disregarding everything in the other tail
– Critical values are in Table A.1 at the back of P & W
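A sketch of the one- and two-tailed critical values computed with scipy instead of Table A.1; the calculated z is hypothetical:

```python
from scipy import stats

alpha = 0.05

# Two-tailed (non-directional): alpha is split between the two tails
z_two = stats.norm.ppf(1 - alpha / 2)   # about 1.96, 2.5% in each tail

# One-tailed (directional): all of alpha sits in one tail
z_one = stats.norm.ppf(1 - alpha)       # about 1.645

print(f"two-tailed critical z = {z_two:.3f}, one-tailed critical z = {z_one:.3f}")

# Decision rule: if the calculated z exceeds the critical z, reject H0
z_calc = 2.10  # hypothetical calculated value
print("Reject H0" if abs(z_calc) > z_two else "Fail to reject H0")
```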

24 Figure A: Intervention 1 is different from Intervention 2 (non-directional)
Figure B: Intervention 1 is less effective than Intervention 2 (directional)

25 Parametric Statistics
Used to estimate population parameters
Based on assumptions:
– Samples randomly drawn from a normally distributed population
– Variances in the samples equal (at least roughly)
– Data on an interval or ratio scale
Classically, if the assumptions are violated, use non-parametric tests
Many view parametric statistics as robust enough to withstand even major violations
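If you want to check the normality and equal-variance assumptions by software rather than simply relying on robustness, a common sketch with scipy (hypothetical data) looks like this:

```python
import numpy as np
from scipy import stats

group_a = np.array([23.0, 25.1, 21.8, 24.4, 26.0, 22.7, 24.9, 23.5])  # hypothetical scores
group_b = np.array([27.2, 28.9, 26.4, 29.1, 27.8, 28.3, 26.9, 29.5])

# Shapiro-Wilk: does each sample plausibly come from a normal population?
w_a, p_a = stats.shapiro(group_a)
w_b, p_b = stats.shapiro(group_b)

# Levene's test: is the equal-variance (homogeneity) assumption reasonable?
w_lev, p_lev = stats.levene(group_a, group_b)

print(f"Shapiro p-values: {p_a:.3f}, {p_b:.3f}; Levene p-value: {p_lev:.3f}")
```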

26 t-test
– Examines two means: two groups, or two conditions/two performances
– Statistical significance is based on:
  – The difference in the means between the groups (the effect size)
  – The variance within the groups (how variable the scores are)
(Fig 19.1)

27 t-test
– Based on a ratio: difference between group means / variability within groups
– The difference between the means (one mean minus the second mean) reflects the treatment effect plus error variance
– The variability within groups reflects error variance alone
– So the ratio can be written: (treatment effect + error variance) / error variance
– Equal versus unequal variances affect the t-ratio; SPSS and most other packages automatically test for this (where in the output?)
NOTE: Error variance is not mistakes; it is anything that is not due to the independent variable

28 t-test
– If the null is true, the ratio reduces to: error / error (approximately 1)
– The bigger the difference between the means, the bigger the ratio (how else does the ratio get bigger?)
– The calculated ratio is compared to a critical value, which determines significance (now t instead of z)
– Entering arguments for Table A.2:
  – Alpha level (almost always 0.05)
  – Degrees of freedom (one or a few less than n)
– A CI can be constructed for where the true mean difference lies
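A sketch of the t-ratio computed by hand for two hypothetical independent groups, compared against the critical t and used to build a CI for the mean difference; scipy is assumed available, and in this class you would normally read these values from SPSS.

```python
import numpy as np
from scipy import stats

group_a = np.array([23.0, 25.1, 21.8, 24.4, 26.0, 22.7])  # hypothetical scores
group_b = np.array([27.2, 28.9, 26.4, 29.1, 27.8, 28.3])
n_a, n_b = len(group_a), len(group_b)

# Pooled (equal-variance) t-ratio: difference between means / variability within groups
sp2 = (((n_a - 1) * group_a.var(ddof=1) + (n_b - 1) * group_b.var(ddof=1))
       / (n_a + n_b - 2))                        # pooled within-group variance
se_diff = np.sqrt(sp2 * (1 / n_a + 1 / n_b))     # standard error of the difference
t_ratio = (group_a.mean() - group_b.mean()) / se_diff

df = n_a + n_b - 2
t_crit = stats.t.ppf(0.975, df)                  # alpha = 0.05, two-tailed
print(f"t = {t_ratio:.2f}, critical t = {t_crit:.2f}, "
      f"significant = {abs(t_ratio) > t_crit}")

# 95% CI for where the true mean difference lies
diff = group_a.mean() - group_b.mean()
print(f"95% CI: ({diff - t_crit * se_diff:.2f}, {diff + t_crit * se_diff:.2f})")
```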

29 t-test
Independent t-test
– Usually random assignment (can be convenience assignment)
– No inherent relationship between the groups
– Degrees of freedom = total sample size - 2
Paired t-test
– An inherent relationship between the groups (e.g., self in a repeated-measures design, or twins)
– The difference scores for each pair are compared
– Degrees of freedom = number of paired scores - 1
Find these in P & W or in an SPSS output table; don't calculate them for this class!
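For completeness, a sketch showing how the two tests differ in practice with hypothetical data; again, in this class you would read these values from an SPSS table rather than compute them.

```python
import numpy as np
from scipy import stats

group_a = np.array([23.0, 25.1, 21.8, 24.4, 26.0, 22.7])   # hypothetical independent groups
group_b = np.array([27.2, 28.9, 26.4, 29.1, 27.8, 28.3])

# Independent t-test: no inherent relationship between groups, df = (n_a + n_b) - 2
t_ind, p_ind = stats.ttest_ind(group_a, group_b)

# Paired t-test: e.g., the same people measured twice, df = number of pairs - 1
pre = np.array([10.2, 11.5, 9.8, 12.1, 10.9, 11.3])         # hypothetical repeated measures
post = np.array([11.0, 12.4, 10.1, 13.0, 11.8, 12.0])
t_pair, p_pair = stats.ttest_rel(pre, post)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f} (df = {len(group_a) + len(group_b) - 2})")
print(f"paired:      t = {t_pair:.2f}, p = {p_pair:.3f} (df = {len(pre) - 1})")
```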

30 ANOVA
– Examines three (or more) means: three (or more) groups, or three (or more) conditions/performances
– Statistical significance is based on:
  – The difference in the means between the groups (the effect size)
  – The variance within the groups (how variable the scores are)
– This should sound familiar

31 ANOVA
– Based on the same ratio: (treatment effect + error variance) / error variance
– For ANOVA it is the F-ratio, derived from the sums of squares (SS)
– The larger the SS, the larger the ______________?
– Calculate SS (subtract the sample mean from each score, square each result, sum them)
– Then determine the mean squares (MS):
  – MSb = SSb / dfb (dfb = one less than the number of groups)
  – MSe = SSe / dfe (dfe = total N - number of groups)
– The F statistic is the ratio MSb / MSe: the ratio of between-groups variance to error variance
– Find this in P & W or in an SPSS output table; don't calculate it for this class!
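A sketch of the F-ratio built from the sums of squares exactly as outlined above, with a check against scipy's one-way ANOVA; the data are hypothetical, and for this class you would read F from SPSS.

```python
import numpy as np
from scipy import stats

groups = [np.array([23.0, 25.1, 21.8, 24.4]),    # hypothetical data, three groups
          np.array([27.2, 28.9, 26.4, 29.1]),
          np.array([24.5, 26.0, 25.2, 27.1])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

# Sums of squares: between groups and within groups (error)
ss_b = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
ss_e = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Mean squares and the F-ratio = MSb / MSe
ms_b = ss_b / (k - 1)        # df_b = number of groups - 1
ms_e = ss_e / (N - k)        # df_e = total N - number of groups
F = ms_b / ms_e

print(f"F = {F:.2f}")
print(stats.f_oneway(*groups))   # should match the hand calculation
```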

