Analysis of Variance.


Analysis of Variance

In practice it is often necessary to compare several (m > 2) independent random samples in terms of their level. We are interested in the hypothesis

H0: μ1 = μ2 = … = μm
H1: μi ≠ μ for at least one i (i = 1, 2, …, m),

where μi, i = 1, 2, …, m, are the mean values of normally distributed populations with equal variances σ², i.e. N(μi, σ²). The statistical method used to verify this hypothesis is called Analysis of Variance, abbreviated ANOVA (or AV).

ANOVA is frequently used in the evaluation of biological experiments. In practice, ANOVA examines the impact of one or more factors (treatments) on a statistical attribute. The factors are labeled A, B, …; in ANOVA they are regarded as qualitative attributes with different variants, called levels of the factor. The result is a quantitative statistical attribute denoted Y. The simplest case is ANOVA with a single factor, called one-factor (one-way) analysis of variance.

A level of the factor refers to either:
- a certain amount of a quantitative factor, e.g. the amount of pure nutrients in manure, or different income groups of households;
- a certain kind of qualitative factor, e.g. different varieties of the same crop, or methods of product placement in stores.

ANOVA is a generalization of Student's t-test for independent samples. ANOVA also examines the impact of qualitative factors on a quantitative attribute, i.e. it analyzes the relationships between attributes.

Scheme of a single-factor experiment ("balanced design"):

Levels of     Repetitions                        Line    Line
factor A      1     2    …    j    …    n        sum     average
1             y11   y12       y1j       y1n      Y1.     ȳ1.
2             y21   y22       y2j       y2n      Y2.     ȳ2.
…
i             yi1   yi2       yij       yin      Yi.     ȳi.
…
m             ym1   ym2       ymj       ymn      Ym.     ȳm.
                                                 Y..     ȳ..
                                                 (total  (overall
                                                  sum)    average)

Line sum:        Yi. = Σj yij
Total sum:       Y.. = Σi Σj yij = Σi Yi.
Line average:    ȳi. = Yi. / n
Overall average: ȳ.. = Y.. / (m·n)

Model for the resulting observed value:

yij = μ + αi + eij,   i = 1, 2, …, m;  j = 1, 2, …, n

where
μ   – the expected value common to all levels of the factor and observed values,
αi  – the effect of the i-th level of factor A,
eij – random error; every measurement is biased by the impact of random factors.

Equivalently, yij = μi + eij with μi = μ + αi. Then we can formulate the null hypothesis:

H0: α1 = α2 = … = αm = 0

=> the effects of all levels of factor A are zero, i.e. insignificant, against the alternative hypothesis

H1: αi ≠ 0 for at least one i (i = 1, 2, …, m)

=> the effect αi of at least one level of the factor is significant, i.e. significantly different from zero.

The estimates of the parameters are the sample characteristics:

μ̂ = ȳ..,   μ̂i = ȳi.,   α̂i = ȳi. − ȳ..

so the model can be rewritten as:

yij = ȳ.. + (ȳi. − ȳ..) + (yij − ȳi.)

[Figure: comparison of two experiments, each with three levels of the factor.]

Principle of ANOVA

The basic principle of the analysis of variance is the decomposition of the total variability of the investigated attribute:

Sc = S1 + Sr

S1 – variability between levels of the factor, caused by the action of factor A ("variability between groups")
Sr – random, residual variability ("variability within groups")
Sc – total variability

Variability        1 Sum of squares   2 Degrees of freedom   3 Mean square (1/2)    4 F
Between groups     S1                 m − 1                  s1² = S1/(m − 1)       F = s1²/sr²
Within groups      Sr                 m·n − m                sr² = Sr/(m·n − m)
Total              Sc                 N − 1 = m·n − 1

The test statistic for one-factor ANOVA can be written:

F = s1² / sr²

The calculated F value is compared with the appropriate tabulated value of the F-distribution, Fα, with (m − 1) and (m·n − m) degrees of freedom.

Decision about the test result:

If Fcalc ≥ Fα((m − 1), (N − m)), we reject H0. In that case the effect of at least one level of the factor is significant, i.e. the average level of the indicator for some group differs significantly from the others; at least one effect αi is significantly different from zero.

If Fcalc < Fα, we do not reject H0.

[Figure: F-distribution density with the acceptance region of H0 and the rejection region of H0, separated at Fα.]
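The one-factor procedure above can be sketched numerically. The three groups below are invented illustration data (not from the slides); the sums of squares, mean squares, and F statistic are computed exactly as in the table above and checked against SciPy's `f_oneway`:

```python
# One-way ANOVA computed by hand and checked against scipy.stats.f_oneway.
# The group data are invented for illustration.
import numpy as np
from scipy import stats

groups = [
    np.array([23.0, 25.0, 21.0, 24.0]),
    np.array([28.0, 30.0, 27.0, 29.0]),
    np.array([22.0, 24.0, 23.0, 21.0]),
]

m = len(groups)                       # number of factor levels
N = sum(len(g) for g in groups)       # total number of observations
grand_mean = np.concatenate(groups).mean()

# Decomposition of total variability: Sc = S1 + Sr
S1 = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between groups
Sr = sum(((g - g.mean()) ** 2).sum() for g in groups)            # within groups

s1_sq = S1 / (m - 1)     # mean square between groups
sr_sq = Sr / (N - m)     # mean square within groups (residual)
F = s1_sq / sr_sq

F_scipy, p = stats.f_oneway(*groups)

# Decision at alpha = 0.05: reject H0 if F >= critical value
F_crit = stats.f.ppf(0.95, m - 1, N - m)
print(f"F = {F:.3f}, F_crit = {F_crit:.3f}, p = {p:.4f}")
```

For this illustrative data the second group's mean is clearly higher, so F exceeds the critical value and H0 is rejected.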

If the null hypothesis is rejected:

We have found only that the effect of the factor on the examined attribute is significant. It is also necessary to identify which levels of the factor are significantly different; for this purpose tests of contrasts are used.

Tests of contrasts: Duncan's test, Scheffé's test, Tukey's test and others.
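As a simple stand-in for the named contrast tests (not Duncan's, Scheffé's, or Tukey's procedure itself), one can run all pairwise t-tests with a Bonferroni correction to see which levels differ. The group data are invented for illustration:

```python
# Pairwise comparisons with a Bonferroni correction as a simple
# substitute for a formal test of contrasts. Illustrative data.
from itertools import combinations
from scipy import stats

groups = {
    "level 1": [23.0, 25.0, 21.0, 24.0],
    "level 2": [28.0, 30.0, 27.0, 29.0],
    "level 3": [22.0, 24.0, 23.0, 21.0],
}

pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)   # Bonferroni-adjusted significance level

for a, b in pairs:
    t, p = stats.ttest_ind(groups[a], groups[b])
    verdict = "differ" if p < alpha else "no significant difference"
    print(f"{a} vs {b}: p = {p:.4f} -> {verdict}")
```

The dedicated contrast tests control the family-wise error rate less conservatively than Bonferroni; this sketch only illustrates the idea of locating the differing levels.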

Assumptions for the use of ANOVA:

- The samples come from a normal distribution; violation of this assumption does not have a serious effect on the results of ANOVA.
- Statistical independence of the random errors eij.
- Identical residual variances σ1² = σ2² = … = σ², i.e. D(eij) = σ² for all i = 1, 2, …, m, j = 1, 2, …, n. This assumption is more serious and can be verified by Cochran's or Bartlett's test.
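The equal-variance assumption can be checked with Bartlett's test, which the slides mention; SciPy provides it as `scipy.stats.bartlett`. The data are invented for illustration:

```python
# Checking the homogeneity-of-variance assumption with Bartlett's test.
# Illustrative data, not from the slides.
from scipy import stats

g1 = [23.0, 25.0, 21.0, 24.0]
g2 = [28.0, 30.0, 27.0, 29.0]
g3 = [22.0, 24.0, 23.0, 21.0]

stat, p = stats.bartlett(g1, g2, g3)
# A large p-value gives no evidence against H0 of equal variances,
# so the homogeneity assumption of ANOVA is not contradicted.
print(f"Bartlett statistic = {stat:.3f}, p = {p:.3f}")
```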

Scheme of a single-factor experiment ("unbalanced design"), with different numbers of repetitions:

Levels of     Repetitions                        Line    Line
factor A      1     2    …    j    …    ni       sum     average
1             y11   y12       y1j  …  (n1)       Y1.     ȳ1.
2             y21   y22       y2j  …  (n2)       Y2.     ȳ2.
…
i             yi1   yi2       yij  …  (ni)       Yi.     ȳi.
…
m             ym1   ym2       ymj  …  (nm)       Ym.     ȳm.
                                                 Y..     ȳ..

where N = n1 + n2 + … + nm is the total number of observations and ȳ.. = Y../N is the overall average.

Variability        1 Sum of squares (SS)   2 Degrees of freedom   3 Mean square (1/2)    4 F
Between groups     S1                      m − 1                  s1² = S1/(m − 1)       F = s1²/sr²
Within groups      Sr                      N − m                  sr² = Sr/(N − m)
Total              S                       N − 1

Two-factor analysis of variance with one observation in each subclass (TAV)

Consider the effect of factor A, which we investigate on m levels, i = 1, 2, …, m. Then consider the effect of factor B, which is observed on n levels, j = 1, 2, …, n. For every combination of the i-th level of factor A and the j-th level of factor B we have only one observation (no repetition), yij.

=> We are verifying two null hypotheses.

Scheme of a two-factor experiment with one observation in each subclass (TAV):

m levels of       n levels of factor B             Row     Row
factor A (rows)   1     2    …    j    …    n      sum     average
1                 y11   y12       y1j       y1n    Y1.     ȳ1.
2                 y21   y22       y2j       y2n    Y2.     ȳ2.
…
i                 yi1   yi2       yij       yin    Yi.     ȳi.
…
m                 ym1   ym2       ymj       ymn    Ym.     ȳm.
Column sum        Y.1   Y.2  …    Y.j  …    Y.n    Y..
Column average    ȳ.1   ȳ.2  …    ȳ.j  …    ȳ.n    ȳ..  (overall average)

The model for the examined attribute can be written as:

yij = μ + αi + βj + eij

We are verifying the validity of two null hypotheses.

Hypothesis for factor A:

H01: α1 = α2 = … = αm = 0

i.e. all effects of the levels of factor A are equal to zero, thus insignificant, against the alternative hypothesis

H11: αi ≠ 0 for at least one i (i = 1, 2, …, m)

i.e. the effect αi of at least one level of factor A is significant, significantly different from zero.

Hypothesis for factor B:

H02: β1 = β2 = … = βn = 0

i.e. all effects of the levels of factor B are equal to zero, thus insignificant, against the alternative hypothesis

H12: βj ≠ 0 for at least one j (j = 1, 2, …, n)

i.e. the effect βj of at least one level of factor B is significant, significantly different from zero.

doc. Ing. Zlata Sojková, CSc.

Variability         1 Sum of squares (SS)   2 Degrees of freedom   3 Mean square (1/2)       4 F
Between rows        S1                      m − 1                  s1² = S1/(m − 1)          F_A = s1²/sr²
Between columns     S2                      n − 1                  s2² = S2/(n − 1)          F_B = s2²/sr²
Residual            Sr                      (m − 1)(n − 1)         sr² = Sr/((m − 1)(n − 1))
Total               Sc                      m·n − 1

Decomposition of the total variability:

Sc = S1 + S2 + Sr

S1 – variability between rows, effect of factor A
S2 – variability between columns, effect of factor B
Sr – residual variability
Sc – total variability
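The two-factor decomposition can be sketched directly from its definition. The data matrix below is invented for illustration (rows are levels of factor A, columns are levels of factor B, one observation per cell):

```python
# Two-factor ANOVA with one observation per cell, computed from the
# decomposition Sc = S1 + S2 + Sr. Illustrative data.
import numpy as np
from scipy import stats

y = np.array([
    [10.0, 12.0, 11.0, 14.0],
    [14.0, 16.0, 15.0, 17.0],
    [ 9.0, 12.0, 10.0, 12.0],
])
m, n = y.shape
grand = y.mean()

S1 = n * ((y.mean(axis=1) - grand) ** 2).sum()   # between rows (factor A)
S2 = m * ((y.mean(axis=0) - grand) ** 2).sum()   # between columns (factor B)
resid = y - y.mean(axis=1, keepdims=True) - y.mean(axis=0) + grand
Sr = (resid ** 2).sum()                          # residual variability
Sc = ((y - grand) ** 2).sum()                    # total variability

s1 = S1 / (m - 1)
s2 = S2 / (n - 1)
sr = Sr / ((m - 1) * (n - 1))

F_A = s1 / sr
F_B = s2 / sr
p_A = stats.f.sf(F_A, m - 1, (m - 1) * (n - 1))
p_B = stats.f.sf(F_B, n - 1, (m - 1) * (n - 1))
print(f"F_A = {F_A:.2f} (p = {p_A:.4f}), F_B = {F_B:.2f} (p = {p_B:.4f})")
```

Note that the decomposition Sc = S1 + S2 + Sr holds exactly, which the code can verify numerically.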

Investigating the relationships between statistical attributes

- Investigating the relationship between qualitative attributes, called measurement of association.
- Investigating the relationship between quantitative attributes: regression and correlation analysis.

Investigating association

It is based on association (pivot, contingency) tables. For testing the existence of a significant relationship between qualitative attributes we use the χ² test of independence:

H0: the two attributes A and B are independent
H1: attributes A and B are dependent

Attribute A has m levels (variants); attribute B has k levels (variants).

Formulation of the hypotheses

Dependence of the attributes will show up as differences in frequencies. E.g. we examine whether the choice of package size is affected by the size of the family:

H0: the choice of the package size does not depend on the number of family members
H1: the choice of the package size is affected by the size of the family

The procedure compares the empirical frequencies with the theoretical frequencies, i.e. what the frequencies would be if attributes A and B were independent.

Simultaneous frequencies (frequencies of the second order) (aibj) and marginal frequencies (ai), (bj):

                  Family size
Package size      1-2 (b1)   3-4 (b2)   5 and more (b3)   Total
up to 100 g       25         37         8                 70   (a1)
100-150 g         10         62         53                125  (a2)
250 g and more    5          41         59                105  (a3)
Total             40         140        120               300 = n

The cell counts are the simultaneous frequencies (aibj), the row and column totals are the marginal frequencies (ai) and (bj), and n is the total count of respondents.

Determination of theoretical frequencies

It is based on the theorem about the independence of random events A and B: P(A ∩ B) = P(A)·P(B). Thus, if attributes A and B are independent, then P(aibj) = P(ai)·P(bj). Estimating the probabilities by relative frequencies:

(aibj)o/n = (ai)/n · (bj)/n   =>   (aibj)o = (ai)·(bj)/n

where (aibj)o are the theoretical frequencies.

Calculation of theoretical frequencies, e.g. (a1b1)o = 70 · 40 / 300 = 9.33:

                  Family size
Package size      1-2 (b1)     3-4 (b2)     5 and more (b3)   Total
up to 100 g       25 (9.33)    37 (32.67)   8 (28.00)         70   (a1)
100-150 g         10 (16.67)   62 (58.33)   53 (50.00)        125  (a2)
250 g and more    5 (14.00)    41 (49.00)   59 (42.00)        105  (a3)
Total             40           140          120               300 = n

(theoretical frequencies in parentheses; n is the total count of respondents)

Calculation of the test criterion and decision:

χ² = Σi Σj [(aibj) − (aibj)o]² / (aibj)o

If χ²calc ≥ χ²α for significance level α and (m − 1)·(k − 1) degrees of freedom, H0 is rejected => attributes A and B are dependent.

In our case this means that the number of family members significantly affects the choice of the package size. Further, we should measure the strength (power) of the dependence.
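The package-size example above can be reproduced with SciPy's `chi2_contingency`, which computes the expected (theoretical) frequencies, the χ² statistic, and the degrees of freedom from the observed table:

```python
# Chi-square test of independence on the package-size / family-size
# table from the slides, using scipy.stats.chi2_contingency.
import numpy as np
from scipy import stats

observed = np.array([
    [25, 37,  8],   # up to 100 g
    [10, 62, 53],   # 100-150 g
    [ 5, 41, 59],   # 250 g and more
])

chi2, p, dof, expected = stats.chi2_contingency(observed)
# dof = (m - 1) * (k - 1) = 2 * 2 = 4
# expected[0, 0] matches the slide's (a1b1)o = 70 * 40 / 300 = 9.33
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2e}")
```

The tiny p-value confirms the slide's conclusion: the choice of package size depends on family size.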