PTP 560 Research Methods Week 9 Thomas Ruediger, PT.

Calculate sensitivity (Sn), specificity (Sp), and likelihood ratios (LRs)
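These can all be computed directly from a 2×2 diagnostic table. A minimal sketch in Python; the counts below are made up purely for illustration:

```python
# Hedged sketch: Sn, Sp, LR+ and LR- from a hypothetical 2x2 diagnostic table.
# tp/fp/fn/tn counts are illustrative, not from the lecture.

def diagnostic_stats(tp, fp, fn, tn):
    """Return (sensitivity, specificity, LR+, LR-) for a 2x2 table."""
    sn = tp / (tp + fn)        # true positives / all who have the condition
    sp = tn / (tn + fp)        # true negatives / all who do not
    lr_pos = sn / (1 - sp)     # how much a positive test raises the odds
    lr_neg = (1 - sn) / sp     # how much a negative test lowers the odds
    return sn, sp, lr_pos, lr_neg

sn, sp, lr_pos, lr_neg = diagnostic_stats(tp=90, fp=20, fn=10, tn=80)
print(f"Sn={sn:.2f}  Sp={sp:.2f}  LR+={lr_pos:.2f}  LR-={lr_neg:.2f}")
```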

Confidence Intervals with small samples  What is a small sample size?  Less than 30 (one of those special numbers in stats)  The sampling distribution tends to spread out  The standard normal curve is not adequate  Use the t-distributions  Theoretical sampling distributions  Flatter peak, wider at the tails  Approach the normal curve as sample size increases  Use values of t instead of z  Described by degrees of freedom (n − 1 for confidence intervals)
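As a sketch of the idea, here is a 95% CI built with a t critical value instead of z. The sample data are hypothetical, and the critical value t(.975, df = 9) ≈ 2.262 is taken from a t-table rather than computed:

```python
# Minimal sketch: 95% CI for a small sample (n < 30) using the t-distribution.
# Data and the t critical value (t-table, df = 9, two-tailed alpha = .05)
# are illustrative assumptions.
from math import sqrt
from statistics import mean, stdev

scores = [12, 15, 11, 14, 13, 16, 10, 14, 12, 13]   # hypothetical sample
n = len(scores)
xbar = mean(scores)
se = stdev(scores) / sqrt(n)    # standard error of the mean
t_crit = 2.262                  # t(.975, df = n - 1 = 9); wider than z = 1.96
ci = (xbar - t_crit * se, xbar + t_crit * se)
print(f"mean={xbar:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Because 2.262 > 1.96, the t-based interval is wider than the z-based one would be, reflecting the extra uncertainty of a small sample.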

Hypothesis testing Are the differences – Representative of "real" effects? – Just due to chance? Null hypothesis (H0) – Means are not different – Stated in terms of population parameters – μA = μB Alternative hypothesis (H1) – Difference too large to be due to chance alone – μA ≠ μB – May be directional or non-directional

Decision vs. Truth:

                      H0 is True           H0 is False
Do Not Reject H0      Correct              Type II error (β)
Reject H0             Type I error (α)     Correct

Type I Error Significance Level (Alpha level, α level) Your choice of how much risk you are willing to take of saying there is a difference when there really is none Set this before the study – Conventionally 0.05 – This is arbitrary, but almost always what we choose – Choose the level based on your Type I error concern

Type I Error Probability Values (evaluated after the study) – Probability of finding this big a difference by chance – p = .07 means a 7% probability of a difference this big arising by chance – You are not stating the probability of the inverse: "p = .93 that the difference is real" is not appropriate – Compare your p-value (calculated after the study) with your α level (set before the study) – If the p-value is less than α, reject the null – If the p-value is greater than α, fail to reject the null

Type II Error Statistical Power Beta (β) – Probability of failing to reject a false H0 (null hypothesis) – β of 0.20 means a 20% chance we will make a Type II error Statistical Power – Complement of β (not compliment) – In this example 0.80 (1.00 – 0.20 = 0.80) – 80% probability of correctly rejecting a false null Before: a priori – power used to determine sample size After: post hoc – power reported if H0 not rejected "If there was a difference, could we have found it?"

Determinants of Statistical Power Significance Criterion – As α increases, power increases (As α increases from 0.05 to 0.10, power increases) Variance – As variance decreases, power increases Sample size – As sample size increases, power increases Effect size (difference between the group means) – As effect size increases, power increases
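These relationships can be illustrated numerically. The sketch below uses a one-sample, two-sided z-test approximation (an assumption for illustration, not a method from the lecture); the effect sizes and sample sizes are made up:

```python
# Rough sketch of how power moves with alpha, n, and effect size,
# using a one-sample two-sided z-test approximation (known variance).
# All numbers are illustrative.
from math import sqrt
from statistics import NormalDist

def approx_power(effect_size, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test.
    effect_size is the true mean shift in SD units (Cohen's d)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # probability the test statistic lands beyond +z_crit
    # (the far opposite tail contributes negligibly and is ignored)
    return 1 - NormalDist().cdf(z_crit - effect_size * sqrt(n))

base = approx_power(0.5, 30)
print(f"d=0.5, n=30, a=.05 -> power ~ {base:.2f}")
print(f"raise alpha to .10 -> {approx_power(0.5, 30, alpha=0.10):.2f}")
print(f"raise n to 60      -> {approx_power(0.5, 60):.2f}")
print(f"raise d to 0.8     -> {approx_power(0.8, 30):.2f}")
```

Each change (a larger α, a larger sample, or a larger effect) raises the power relative to the baseline, matching the determinants listed above.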

z-scores and z-ratios A z-score represents the distance between: – A sample score and – The sample mean – Divided by the standard deviation You will see this in osteoporosis scores (a z-score of 2 is 2 SD away from the healthy-woman mean) A z-ratio represents the distance between: – A sample mean and – The population mean – Divided by the standard error of the mean
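A small illustration of the two formulas, with hypothetical numbers:

```python
# Sketch contrasting a z-score (one score vs. the sample) with a
# z-ratio (a sample mean vs. the population). All values are hypothetical.
from math import sqrt

# z-score: distance of one score from the sample mean, in SD units
score, sample_mean, sample_sd = 0.85, 1.05, 0.10   # e.g., a bone-density value
z_score = (score - sample_mean) / sample_sd
print(f"z-score = {z_score:.1f}")                  # 2 SD below the mean

# z-ratio: distance of a sample mean from the population mean,
# in standard-error units (SEM = sigma / sqrt(n))
xbar, mu, sigma, n = 52.0, 50.0, 8.0, 64
z_ratio = (xbar - mu) / (sigma / sqrt(n))
print(f"z-ratio = {z_ratio:.1f}")
```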

Critical Region  The portion of the curve beyond the critical z  If calculated z > critical z, reject H0  One or two tailed test?  Non-directional hypothesis – two tailed  z of 2.00 encompasses 95.45% (non-critical)  4.55% is the critical area  z of 1.96 encompasses 95%  Critical region is 5%, 2.5% in each tail of a non-directional test  Directional hypothesis – one tailed  z of 1.645 encompasses 95%  Critical region is 5%, all in one tail of a directional test, versus 2.5% per tail for a non-directional test  Practically, you are disregarding everything in the other tail  Table A.1 in the back of P & W
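Rather than looking these values up in Table A.1, they can be computed; a sketch using Python's standard library:

```python
# Sketch: critical z values for alpha = .05, computed from the standard
# normal distribution instead of read from a table.
from statistics import NormalDist

alpha = 0.05
z_two_tailed = NormalDist().inv_cdf(1 - alpha / 2)   # 2.5% in each tail
z_one_tailed = NormalDist().inv_cdf(1 - alpha)       # all 5% in one tail
print(f"two-tailed critical z = +/-{z_two_tailed:.2f}")   # ~1.96
print(f"one-tailed critical z = {z_one_tailed:.2f}")      # ~1.64
```

The one-tailed cutoff is smaller, which is why a directional test finds the same difference "significant" more easily, at the price of ignoring the other tail.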

Figure captions (images not reproduced in the transcript): Figure A: Intervention 1 is different from Intervention 2. Figure B: Intervention 1 is less effective than Intervention 2.

Parametric Statistics Used to estimate population parameters Based on assumptions – Samples randomly drawn from a normally distributed population – Variances of the samples equal (at least roughly) – Data on an interval or ratio scale Classically, if assumptions are violated, use non-parametric tests Many view parametric stats as robust enough to withstand even major violations

t-test  Examines two means  Two groups  Two conditions/two performances  Statistical significance based on  Difference in the means  Between the groups  The effect size  Variance  Within the groups  How variable are the scores Fig 19.1

t-test  Based on a ratio  Difference between group means/Variability within groups  Difference between means  Treatment effect and error variance  One mean- second mean and variability between the groups. In both the numerator and denominator of t-test ratio, so holds it to zero.  Variability within groups  Error variance alone  Equal and unequal variances affect t-ratio  SPSS and most other packages automatically test for this. Where?  Ratio can be written: Treatment effect and error variance/Error variance NOTE: Error variance  Not mistakes  Is anything that is not due to the independent variable

t-test  If the null is true  Ratio reduces to: Error/Error  The bigger the difference - The bigger the ratio How does the ratio get bigger?  This ratio is compared to the critical  Determines significance  Is the ratio significant?  Based on critical value (but now t instead of z)  Entering arguments (Table A.2)  Alpha level (almost always 0.05  Degrees of freedom ( one or a few less than n )  CI can be constructed for where the true mean difference lies

t-test  Independent t-test  Usually random assignment  Can be convenience assignment  No inherent relationship between the groups  Degrees of freedom = total sample size – 2  Paired t-test  An inherent relationship between the groups  Self (repeated measure test)  Twins  Difference scores for each pair compared  Degrees of freedom = number of paired scores – 1  Find this in P & W or in an SPSS output table – don’t calculate for this class!

ANOVA  Examines three (or more) means  Three (or more) groups  Three (or more) conditions/ three (or more) performances  Statistical significance based on  Difference in the means  Between the groups  The effect size  Variance  Within the groups  How variable are the scores  This should sound familiar

ANOVA  Based on ratio  Treatment effect and error variance/Error variance  For ANOVA it is the F-ratio  Derived from the Sum of Squares (SS)  Larger the SS the larger the ______________?  Calculate SS (each score minus sample mean, square each result, sum them)  Then determine the Mean Square (MS)  MS b = SS b /df b (df b = one less than the number of groups)  MS e = SS e /df e (df e = total N – number of groups)  F statistic is the ratio = MS b /Ms e  Ratio of the between groups variance to error variance  Find this in P & W or in an SPSS output table – don’t calculate for this class!