Calculating Sample Size: Cohen’s Tables and G*Power


Calculating Sample Size: Cohen’s Tables and G*Power. A practical example

Calculating Sample Size (infographic slide with sample descriptive statistics: 22% of males and 43% of females work in management positions; Variables One through Five at 14%, 22%, 31%, 34%, and 39%; 275 subjects live in a rural area and 450 in the city; other placeholder figures include 3 in 5, 2 in 5, 29%, and 80%).

Calculating Sample Size Outline:
- General research proposal scenario
- Cohen’s d effect size concept
- Pearson’s r effect size concept
- Type I and Type II errors
- Cohen’s d tables
- Calculating sample size
- Pearson’s r tables
- G*Power tool
- Linear regression (a priori)
- ANOVA (a priori)
- ANOVA (post hoc)
- Questions?

Calculating Sample Size. A common scenario in proposals for URM (pre QRM) or statistics classes: “I am conducting a correlational design and my chosen sample size is 25 subjects” (no explanation provided). My typical answer: the sample size is not something we can select arbitrarily; it must be calculated from the type of test, the expected power, and the expected effect size. Sample size, power, and effect size are intimately related, and the specific tests to be performed (for example, factor analysis) also play a role in the calculation. About effect size: “An effect size is simply an objective and (usually) standardized measure of the magnitude of observed effect. The fact that the measure is standardized just means that we can compare effect sizes across different studies that have measured different variables . . . Many measures of effect size have been proposed, the most common of which are Cohen's d, Pearson's correlation coefficient r and the odds ratio” (Field, 2009, p. 57). Effect size is important because, in addition to knowing that a test is significant, we can assess how large the effect is. There are many tools and tables for calculating effect size.
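As a small illustration of how sample size, power, and effect size constrain one another, here is a minimal sketch (assuming the Python statsmodels package is available; Cohen’s tables or G*Power would give essentially the same answer) that solves for the per-group n of an independent-samples t test at a medium effect of d = 0.5, α = .05, and power = .80.

```python
# A minimal a priori sample-size sketch, assuming statsmodels is installed.
# It solves for n per group in an independent-samples t test, the same kind of
# calculation Cohen's tables and G*Power perform.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,           # Cohen's d (medium effect)
    alpha=0.05,                # significance level (Type I error rate)
    power=0.80,                # 1 - beta, i.e., a Type II error rate of .20
    ratio=1.0,                 # equal group sizes
    alternative="two-sided",
)
print(round(n_per_group))      # roughly 64 participants per group
```

Changing any one of the inputs changes the required n, which is why a sample size quoted without the test, the expected effect size, and the desired power cannot be justified.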

Cohen’s d. Cohen’s effect size is used as a complement to the significance test to show the magnitude of that significance, or to represent the extent to which a null hypothesis is false. It gives an estimate of the size of the observed difference between groups: small, medium, or large. “Cohen's d statistic represents the standardized mean differences between groups. Similar to other means of standardization such as z scoring, the effect size is expressed in standard score units” (Salkind, 2010, p. 2). In general, Cohen’s d is defined as d = (μ1 − μ2) / σ, where d represents the effect size, μ1 and μ2 represent the two population means, and σ represents the pooled within-group population standard deviation; in practice we use the sample means and the pooled sample standard deviation. Cohen’s suggestions about what constitutes a small, medium, or large effect are: d = 0.2 (small), d = 0.5 (medium), d = 0.8 (large).
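To make the definition concrete, the sketch below (the two groups are hypothetical numbers invented purely for illustration) computes Cohen’s d from sample data using the pooled sample standard deviation, as the slide notes is done in practice.

```python
# A minimal sketch of Cohen's d from sample data (the two groups below are hypothetical).
import numpy as np

group1 = np.array([12.0, 14.0, 15.0, 11.0, 13.0, 16.0])
group2 = np.array([10.0, 11.0, 12.0, 9.0, 13.0, 10.0])

n1, n2 = len(group1), len(group2)
mean_diff = group1.mean() - group2.mean()

# Pooled within-group standard deviation (sample variances with ddof=1).
pooled_sd = np.sqrt(
    ((n1 - 1) * group1.var(ddof=1) + (n2 - 1) * group2.var(ddof=1)) / (n1 + n2 - 2)
)

d = mean_diff / pooled_sd
print(f"Cohen's d = {d:.2f}")  # interpret against 0.2 (small), 0.5 (medium), 0.8 (large)
```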

Pearson’s r. Pearson’s r, the correlation coefficient typically used to measure the relationship between continuous variables, can also be used to quantify the difference in means between two groups (similar to Cohen’s d). Cohen also suggested some common benchmarks (Field, 2017):
r = 0.10 (small effect): the effect explains 1% of the total variance.
r = 0.30 (medium effect): the effect accounts for 9% of the total variance.
r = 0.50 (large effect): the effect accounts for 25% of the variance.
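The variance-explained figures follow from squaring r. The short sketch below verifies them and also applies one widely used conversion between d and r for two equal-sized groups, r = d / √(d² + 4); that conversion is an extra illustration, not something taken from the original slide.

```python
# A minimal sketch: variance explained by each benchmark r, plus a common
# d-to-r conversion that assumes two groups of equal size (r = d / sqrt(d^2 + 4)).
import math

for r in (0.10, 0.30, 0.50):
    print(f"r = {r:.2f} -> r^2 = {r**2:.2f} ({r**2:.0%} of variance explained)")

for d in (0.2, 0.5, 0.8):
    r_equiv = d / math.sqrt(d**2 + 4)
    print(f"d = {d:.1f} -> approximately r = {r_equiv:.2f}")
```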

Type I and Type II Errors. A Type I error (or false positive) occurs when we believe there is a genuine effect when in fact there is none. The opposite, a Type II error (or false negative), occurs when we believe there is no effect when in reality there is one. The most commonly accepted probability for this second error is .2 (or 20%), called the β-level. This means that if we took 100 samples (in which the effect exists), we would fail to detect the effect in 20 of those samples (Field, 2017). “The power of a test is the probability that a given test will find an effect assuming that one exists in the population. This is the opposite of the probability that a given test will not find an effect assuming that one exists in the population, which, as we have seen, is the β-level (i.e., Type II error rate)” (Field, 2017, p. 47). The problem with the significance level (whether it is .01, .05, or .10) is that it does not tell us the importance of the effect, but we can measure the size of the effect in a standardized way. The effect size is a standardized measure of the magnitude of the observed effect.
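A short simulation (all numbers below are invented for illustration) makes the two error rates tangible: running many t tests on data where the null hypothesis is true shows roughly α false positives, while running them on data with a real but modest effect and a small sample shows how often the effect is missed (the β rate), which should come out well above the conventional .20 for this small n.

```python
# A minimal simulation sketch of Type I and Type II error rates, assuming numpy and scipy.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
alpha, n, reps = 0.05, 20, 5000

false_positives = 0   # reject H0 when there is no effect (Type I error)
misses = 0            # fail to reject H0 when the effect exists (Type II error)

for _ in range(reps):
    # No real effect: both groups come from the same population.
    a, b = rng.normal(0, 1, n), rng.normal(0, 1, n)
    if ttest_ind(a, b).pvalue < alpha:
        false_positives += 1

    # Real effect of d = 0.5 standard deviations.
    a, b = rng.normal(0, 1, n), rng.normal(0.5, 1, n)
    if ttest_ind(a, b).pvalue >= alpha:
        misses += 1

print(f"Type I error rate  ~ {false_positives / reps:.2f} (should be near {alpha})")
print(f"Type II error rate ~ {misses / reps:.2f} (power ~ {1 - misses / reps:.2f})")
```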

Type I and Type II Errors (figure slide).

Calculating Sample Size: Type I and Type II Errors. A significant p value can be a false positive; one way to guard against this is to lower the significance level.

Calculating Sample Size: Type I and Type II Errors. If we lower the significance level, however, a false positive can turn into a false negative, so there is always a trade-off between Type I and Type II errors.

Calculating Sample Size: Type I and Type II Errors. “Power is often expressed as 1 − β, where β represents the likelihood of committing a Type II error (i.e., the probability of incorrectly retaining the null hypothesis). Betas can range from .00 to 1.00. When the beta is very small (close to .00), the statistical test has the most power. For example, if the beta equals .05, then statistical power is .95. Multiplying statistical power by 100 yields a power estimate as a percentage. Thus, 95% power (1 − β = .95 × 100%) suggests that there is a 95% probability of correctly finding a significant result if an effect exists” (Christopher & Nyaradzo, 2010, p. 3).

Calculating Sample Size Using Cohen’s Tables: Using d Effects

Calculating Sample Size Using Cohen’s Tables: Using d Effects

Calculating Sample Size Using Cohen’s Tables: Using r Effects

Calculating Sample Size Using Cohen’s Tables: Using r Effects (see next slide)

Calculating Sample Size Using Cohen’s Tables: Using d Effects

Calculating Sample Size Using G*Power
G*Power download: http://www.gpower.hhu.de/
G*Power manual: http://www.gpower.hhu.de/fileadmin/redaktion/Fakultaeten/Mathematisch-Naturwissenschaftliche_Fakultaet/Psychologie/AAP/gpower/GPowerManual.pdf
Video: Using G*Power to calculate sample size (a priori), F test for linear multiple regression, fixed model, R² deviation from zero: https://youtu.be/Kvz5AHFBEvQ
Video: MANOVA special effects and interactions: http://youtu.be/aOnZKEj3Wmg

Calculating Sample Size: Linear Multiple Regression (A Priori). With an effect size of f² = .15 (a medium effect by Cohen’s conventions for f²), a significance level of .05, power of .80, and 4 predictors, the minimum recommended sample size is 85.
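This G*Power result can be cross-checked with a short script. The sketch below assumes the same noncentral F setup G*Power uses for the fixed-model “R² deviation from zero” test (noncentrality λ = f²·N, numerator df = number of predictors, denominator df = N − predictors − 1) and searches for the smallest N that reaches the requested power; it should land on approximately the same minimum sample size.

```python
# A minimal a priori sketch for the fixed-model multiple regression F test
# (R^2 deviation from zero), assuming scipy is available.
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2, n_predictors, alpha=0.05):
    """Power of the test that R^2 = 0 with n observations."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n                                  # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)      # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, lam)     # P(reject H0 | effect f^2)

def min_sample_size(f2, n_predictors, alpha=0.05, target_power=0.80):
    n = n_predictors + 2                          # smallest n with positive df2
    while regression_power(n, f2, n_predictors, alpha) < target_power:
        n += 1
    return n

# f^2 = .15, alpha = .05, power = .80, 4 predictors -> about 85, as on the slide
print(min_sample_size(f2=0.15, n_predictors=4))
```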

Calculating Sample Size: ANOVA (A Priori). With an effect size of .15, a significance level of .05, power of .80, and 30 groups, the minimum recommended sample size is 2283.
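A corresponding sketch for a one-way fixed-effects ANOVA is below (numerator df = groups − 1, denominator df = N − groups, λ = f²·N). Because the slide does not show which G*Power test and effect-size convention produced the 2283 figure, the example call uses a textbook configuration instead, a hypothetical medium effect of f = 0.25 across 4 groups, rather than attempting to reproduce that number.

```python
# A minimal a priori sketch for a one-way fixed-effects ANOVA, assuming scipy
# and the lambda = f^2 * N_total parameterization used by G*Power.
from scipy.stats import f as f_dist, ncf

def anova_power(n_total, f_effect, n_groups, alpha=0.05):
    """Power of the omnibus F test for a one-way ANOVA with n_total observations."""
    df1 = n_groups - 1                      # numerator df: groups - 1
    df2 = n_total - n_groups                # denominator df: N - groups
    lam = (f_effect ** 2) * n_total         # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

def min_total_n(f_effect, n_groups, alpha=0.05, target_power=0.80):
    n = n_groups + 1                        # smallest N with positive df2
    while anova_power(n, f_effect, n_groups, alpha) < target_power:
        n += 1
    return n

# Hypothetical example: medium effect f = 0.25, alpha = .05, power = .80, 4 groups
# -> roughly 180 participants in total (about 45 per group).
print(min_total_n(f_effect=0.25, n_groups=4))
```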

Calculating Sample Size: Post Hoc (after the sample/data has been collected). With an effect size of f² = .15, a significance level of .05, and 70 subjects collected for a model with 4 predictors, the obtained power is .70.
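A post hoc calculation runs the same machinery in the other direction: instead of solving for N, it plugs in the sample actually collected and reports the achieved power. The sketch below reuses the assumed λ = f²·N parameterization from the a priori example; it should produce a power in the neighborhood of the value on the slide, with small differences possible depending on the exact options chosen in G*Power.

```python
# A minimal post hoc power sketch for the multiple regression F test,
# assuming scipy and the lambda = f^2 * N parameterization.
from scipy.stats import f as f_dist, ncf

def regression_power(n, f2, n_predictors, alpha=0.05):
    """Achieved power given the sample size actually collected."""
    df1 = n_predictors
    df2 = n - n_predictors - 1
    lam = f2 * n
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, lam)

# 70 subjects, 4 predictors, f^2 = .15, alpha = .05 -> power of roughly .70 to .75
print(round(regression_power(n=70, f2=0.15, n_predictors=4), 2))
```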

Calculating Sample Size: References
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Christopher, A. S., & Nyaradzo, H. M. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation, 1(2), 1-18. doi:10.1177/2150137810373613
Field, A. (2017). Discovering statistics using SPSS (5th ed.). Thousand Oaks, CA: Sage Publications.
Salkind, N. (2010). Encyclopedia of research design. Thousand Oaks, CA: Sage Publications. doi:10.4135/9781412961288