Statistics in Science — Sample Size Determination for Efficient Use of Resources (PGRM 3.5)


Statistics in Science   4 determining factors: A – D Variability of experimental material Expressed as either (i) the Standard deviation (SD) of the response (ii) the CV (= 100*SD/Mean). The CV for biological responses is often in range 10-30%. Size of difference expected (d) Based on (i) Knowledge of similar work. (ii) Knowledge of the science (iii) economically important difference? A B

Statistics in Science   Estimating the SD & CV (for A) From analysis of similar data: SD is estimated by √MSE From the literature: SEM (SE of mean) = √(MSE/r) so SD = SEM × √r Example: Chowdhury and Rosario (1994) J. Agric. Sci. Camb 122, Randomised block with 5 blocks (r=5) SEM for Dry matter yield = SD estimated by × 5 = Mean ≈ 5 so CV ≈ 100 × 0.460/5 = 9.2%

Statistics in Science   Estimating the SD & CV (contd) Example: Wayne el al. (1999).J. Ecol 87, Replication = 6 SED for reproductive weight per stand is Recall! SED = √2 × SEM soSD = √(r/2) × SED SD estimated by × √(6/2) = 0.51 Mean ≈ 1.2 CV ≈ 100 × 0.51/1.2 = 42.5%

Statistics in Science   More determining factors: C & D Criteria for rejecting the null hypothesis Significance Level = Probability of rejecting the null when it is true. (ie concluding there is a difference when there is not) Recall: rejecting when p < 0.05 gives significance level 0.05 Typical levels : 0.05, 0.01, Power = Probability of concluding there is a difference when there is one of size d Typical levels: , 0.95 D C

Statistics in Science   Calculation of replicates per treatment Fixing significance at 0.05, and power at 80% To detect a d% difference the required replication per treatment is: r = 16(CV/d) 2 Example CV = 15%, d = 10%, r = 16 (15/10) 2 = 36

Statistics in Science   Review of Resource Use To see how precise an experiment actually was the formula above can be rewritten as d = 4 CV/r to give d = 4 SEM% (= 2.82 SED%) where SEM (SED) are expressed as % of the overall mean. Example follows:

Statistics in Science   Example: Review of resource use Suppose an experiment with two treatments has the following result Treatment 1 2SEM The grand mean is 12.3 and the SEM as a percentage of that is 8.9%. The formula says that a real underlying difference between treatments of size 4 x 8.9% = 35.6% would have about an 80% chance of being detected at the 5% level in this experiment.