
Fundamentals of Data Analysis Lecture 7 ANOVA

Program for today:
- Analysis of variance
- One-factor design
- Many-factor design
- Latin square scheme

Analysis of variance

In advanced empirical studies the basic method of testing scientific hypotheses is the experiment. The simplest experiments are those in which we are interested, as before, in the impact of a single factor on the course of the experiment. More complex are experiments with many interacting factors. In such experiments, the objects corresponding to the values (levels) of the factor under test and the experimental units (samples) can be assigned to one another in various ways. According to the adopted criterion, this leads to specific types of experimental designs: simple designs (e.g. the completely randomized design), block designs (e.g. the randomized complete block design) and row-column designs (e.g. the Latin square design). In each of these designs the assignment of samples to factor levels is fully random.

Analysis of variance

The new methodology of experimental research, and more specifically the planning of experiments based on the analysis of variance, was proposed by Ronald A. Fisher and initially applied in agriculture. It allows more than one independent variable to be manipulated at a time, which considerably broadens the scope of generalization of the experimental conclusions. Most importantly, the method takes into account the effect of the combined action of two or more independent variables on the dependent variable.

Analysis of variance

The analysis of variance test may be used when the population distributions are normal, or close to normal, and have equal variances. It may well happen that all populations have normal distributions and equal variances but differ in their mean values.

Analysis of variance

The essence of the analysis of variance is the decomposition of the sum of squares of the whole set of results into additive components, the number of which follows from the design of the experiment. Comparing the individual variance resulting from the action of a factor with the so-called residual variance, which measures the random error variance (using Snedecor's F test), answers the question whether that factor plays a significant role in shaping the results of the experiment.

Analysis of variance

For the purposes of experimental practice, statistics has already developed many experimental design techniques, such as randomized blocks, Latin squares and factorial designs. Each type of experiment has its own scheme of analysis of variance.

One factor design

When we compare the means of more than two samples using parametric tests, we need to compare all pairs of means, which is very tedious and time-consuming. Another way is to use Snedecor's F statistic. Data from several samples, or data from one sample grouped according to some criterion, are denoted by x_kl, where k is the population (group) number and l the observation (measurement) number; the number of elements in each group may differ. We set the null hypothesis that there is no difference between the means of the populations from which the samples have been drawn: H0: m_1 = m_2 = ... = m_n.

One factor design

The estimator of the variance of the i-th sample of size n_i is:

s_i² = (1 / (n_i − 1)) · Σ_l (x_il − x̄_i)²,  l = 1, ..., n_i,

where x̄_i is the mean of the i-th sample.

One factor design

The overall variance can be estimated by pooling the sample variances. The variance estimated from the sample variances (i.e. the variance within groups) has N − k degrees of freedom and is given by:

s_w² = Σ_i (n_i − 1) s_i² / (N − k),  i = 1, ..., k,

where N is the total number of observations and k the number of groups.

One factor design

The variance between groups is given by:

s_m² = Σ_i n_i (x̄_i − x̄)² / (k − 1),

where x̄ is the overall mean.

One factor design

The F statistic is calculated so that it is larger than unity; that is, for s_w² > s_m² we take

F = s_w² / s_m²,

and when s_w² < s_m² we take

F = s_m² / s_w².
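As a small sketch of this convention (the two variance estimates below are assumed illustrative values, not taken from the lecture), the larger estimate is always placed in the numerator:

```python
# Sketch: form the F ratio so that it is at least 1.
# The two variance estimates are assumed illustrative values.
s2_within = 7.5    # variance within groups (random error)
s2_between = 3.2   # variance between groups

# Put the larger estimate in the numerator, as described above.
if s2_within > s2_between:
    F = s2_within / s2_between
else:
    F = s2_between / s2_within

print(F >= 1.0)  # True by construction
```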

One factor design – null hypotheses

1°. H0: H1: 2°. H0: H1: 3°. H0: H1: 4°. H0: H1: where α_i denotes the i-th effect of factor A on the studied feature X, and α_i = (m_i − m);

One factor design - verification of the hypothesis of equality of means

From each of the tested populations we draw a sample of size n_i (i = 1, ..., k). If x_ij is the j-th result in the i-th sample, the mean of the i-th sample is:

x̄_i = (1 / n_i) · Σ_j x_ij,  j = 1, ..., n_i,

whereas the overall mean is:

x̄ = (1 / n) · Σ_i Σ_j x_ij,  with n = Σ_i n_i.

One factor design - verification of the hypothesis of equality of means

The sum of squared deviations of all observations x_ij from the overall mean (denoted by q) can be expressed as the sum of two components:

q = Σ_i Σ_j (x_ij − x̄)² = q_R + q_G.

The first component,

q_R = Σ_i Σ_j (x_ij − x̄_i)²,

is called the total variation within groups (or the residual sum of squares), while the second,

q_G = Σ_i n_i (x̄_i − x̄)²,

is called the sum of squares between groups.
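The identity q = q_R + q_G can be checked numerically. The sketch below uses small made-up groups of unequal size (the data are assumed for illustration only):

```python
# Numeric check of the decomposition q = q_R + q_G for unequal group sizes.
# The data are made up for illustration.
groups = [[3.1, 2.9, 3.4], [2.5, 2.7, 2.6, 2.4], [3.0, 3.3]]

all_x = [x for g in groups for x in g]
grand = sum(all_x) / len(all_x)               # overall mean
means = [sum(g) / len(g) for g in groups]     # group means

q = sum((x - grand) ** 2 for x in all_x)                             # total
q_R = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)    # within
q_G = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))  # between

print(abs(q - (q_R + q_G)) < 1e-12)  # True: the decomposition holds
```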

One factor design - verification of the hypothesis of equality of means

The corresponding random variables Q, Q_R and Q_G, divided by the appropriate numbers of degrees of freedom, are estimators of the unknown variance σ². Moreover, the random variable

F = (Q_G / (k − 1)) / (Q_R / (n − k))

has Snedecor's F distribution with (k − 1, n − k) degrees of freedom.

One factor design - verification of the hypothesis of equality of means

Example
We measured the lifetimes of three types of light bulbs, obtaining the following times in hours:
Type 1: 1802, 1992, 1854, 1880, 1761, 1900;
Type 2: 1664, 1755, 1823, 1862;
Type 3: 1877, 1710, 1882, 1720,
At the confidence level of 95%, test the hypothesis that the average lifetimes of all types of light bulbs are the same (the alternative hypothesis being that they are not all the same).

One factor design - verification of the hypothesis of equality of means

Example
First, using Bartlett's test (because the samples are of different sizes), we verify the hypothesis that the variances of the three samples are equal. From the Bartlett test calculations we find that, at the 95% confidence level, there is no reason to reject the hypothesis of equal variances.

One factor design - verification of the hypothesis of equality of means

Example
Results of calculations:

One factor design - verification of the hypothesis of equality of means

Example
From the tables of Snedecor's F distribution we read F(α, k − 1, n − k) = F(0.95, 2, 12) = 3.88, so [3.88, +∞) is the critical region. The calculated value does not fall into this region, so there is no reason to reject the hypothesis that the means of the three samples are equal.
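The example can be reproduced in a few lines of Python directly from the sums of squares defined above. Note that the transcript truncates the fifth lifetime of Type 3, so the sketch below uses only the four listed values, which shifts the degrees of freedom from (2, 12) to (2, 11); the conclusion is unchanged.

```python
# One-way ANOVA for the light-bulb example, computed from the lecture formulas.
# The fifth Type 3 value is missing from the transcript, so only the four
# listed values are used (degrees of freedom become (2, 11), not (2, 12)).
groups = [
    [1802, 1992, 1854, 1880, 1761, 1900],  # Type 1
    [1664, 1755, 1823, 1862],              # Type 2
    [1877, 1710, 1882, 1720],              # Type 3 (truncated in the source)
]

n = sum(len(g) for g in groups)
k = len(groups)
grand = sum(sum(g) for g in groups) / n
means = [sum(g) / len(g) for g in groups]

q_G = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))  # between
q_R = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)    # within

F = (q_G / (k - 1)) / (q_R / (n - k))
print(f"F = {F:.2f}")  # F = 1.46, well below the critical value of about 3.98
```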

One factor design - verification of the hypothesis of equality of means

Exercise
The same object was measured 5 times with each of three micrometers, yielding the results:
Micrometer 1: 4.077, 4.084, 4.078, 4.085, 4.082;
Micrometer 2: 4.079, 4.086, 4.080, 4.070, 4.081;
Micrometer 3: 4.075, 4.069, 4.093, 4.071,
Assuming that the measurement errors are normally distributed with the same variance, test the hypothesis that the mean values obtained with these micrometers are the same.
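A sketch of a solution, using the same sums of squares as in the worked example (the fifth reading of micrometer 3 is missing from the transcript, so only the four listed values are used):

```python
# One-way ANOVA for the micrometer exercise, using the lecture formulas.
# The fifth value of micrometer 3 is missing in the transcript; the four
# listed values are used instead.
def one_way_F(groups):
    """Return the F statistic (q_G / (k-1)) / (q_R / (n-k))."""
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    q_G = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    q_R = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    return (q_G / (k - 1)) / (q_R / (n - k))

F = one_way_F([
    [4.077, 4.084, 4.078, 4.085, 4.082],  # micrometer 1
    [4.079, 4.086, 4.080, 4.070, 4.081],  # micrometer 2
    [4.075, 4.069, 4.093, 4.071],         # micrometer 3 (truncated)
])
print(F < 1)  # True: F is far below any 95% critical value, so there is
              # no reason to reject the hypothesis of equal means
```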

Thanks for attention!

Books:
W. Wagner, P. Błażczak, Statystyka matematyczna z elementami doświadczalnictwa, cz. 2, AR, Poznań.
T. M. Little, F. J. Hills, Agricultural experimentation. Design and analysis, Wiley and Sons, New York, 1987.