Genotype x Environment Interactions: Analyses of Multiple Location Trials

Previous Class
- How many locations are sufficient?
- Where should sites be located?
- Assumptions of over-site analyses
- Homoscedasticity of error variance
- Bartlett test (same d.f.; different d.f.)
- Transforming the data
- Analysis of variance table and EMS

Interpretation

Interpretation
- Look at the data: diagrams and graphs
- Joint regression analysis
- Variance comparison analysis
- Probability analysis
- Multivariate treatment of residuals: Additive Main Effects and Multiplicative Interactions (AMMI)

Multiple Experiment Interpretation: Visual Inspection
- Inter-plant competition study
- Four crop species: pea, lentil, canola, mustard
- Plant height (cm) recorded every week after planting
- Significant species x time interaction

Plant Biomass x Time after Planting

[Figure: height-over-time curves for Pea, Lentil, Mustard, and Canola]

[Figure: the same curves grouped as Legume vs. Brassica]

Joint Regression

Regression Revision
- Glasshouse study of the relationship between time and plant biomass
- Two species: B. napus and S. alba
- Destructively sampled each week for 14 weeks
- Dry weight recorded

Dry Weight Above Ground Biomass

[Figure: Biomass Study, dry weight of S. alba and B. napus over time]

[Figure: Biomass Study with ln transformation, S. alba and B. napus]

B. napus
Mean x = 7.5; Mean y = …; SS(x) = 227.5; SS(y) = 61.66; SP(x,y) = …
Ln(Growth) = … + … x Weeks; se(b) = …

Source       df   SS   MS
Regression    1    …    … ***
Residual     12    …    …
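The regression quantities on this slide can be reproduced from first principles. Below is a minimal Python sketch; the ln(dry weight) values are hypothetical stand-ins (the slide's B. napus numbers were lost in transcription), and only the design, weeks 1 to 14 with SS(x) = 227.5, is taken from the slide.

```python
# Simple linear regression via sums of squares and cross-products,
# mirroring the slide's quantities: SS(x), SS(y), SP(x,y), b = SP/SS(x).

def mean(v):
    return sum(v) / len(v)

def ss(v):
    m = mean(v)
    return sum((xi - m) ** 2 for xi in v)

def sp(x, y):
    mx, my = mean(x), mean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

def regress(x, y):
    b = sp(x, y) / ss(x)             # slope
    a = mean(y) - b * mean(x)        # intercept
    ss_reg = sp(x, y) ** 2 / ss(x)   # regression SS (1 df)
    ss_res = ss(y) - ss_reg          # residual SS (n - 2 df)
    return a, b, ss_reg, ss_res

weeks = list(range(1, 15))                   # 14 weekly harvests, mean x = 7.5
ln_weight = [0.40 * w + 0.1 for w in weeks]  # hypothetical ln(dry weight)

a, b, ss_reg, ss_res = regress(weeks, ln_weight)
print(f"SS(x) = {ss(weeks)}")                # 227.5, as on the slide
print(f"Ln(Growth) = {a:.2f} + {b:.2f} x Weeks")
```

With the fabricated, exactly linear data the residual SS is essentially zero; real data would leave a nonzero residual to test the regression against.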

S. alba
Mean x = 7.5; Mean y = …; SS(x) = 227.5; SS(y) = 61.03; SP(x,y) = …
Ln(Growth) = … + … x Weeks; se(b) = …

Source       df   SS   MS
Regression    1    …    … ***
Residual     12    …    …

Comparison of Regression Slopes: t-test

t = (b1 - b2) / [(se(b1) + se(b2)) / 2] = 0.22 ns (slopes do not differ significantly)
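The slope comparison above can be sketched in a few lines. Note the statistic is coded exactly as the slide writes it, dividing by the average of the two standard errors; a more common pooled form divides by sqrt(se1² + se2²). The input values are illustrative, since the slide's standard errors were not preserved.

```python
# t-test for equality of two regression slopes, using the formula as
# written on the slide: t = (b1 - b2) / [(se(b1) + se(b2)) / 2].

def slope_t(b1, se1, b2, se2):
    return (b1 - b2) / ((se1 + se2) / 2)

# e.g. two species' growth-rate slopes (hypothetical numbers):
t = slope_t(0.42, 0.05, 0.40, 0.04)
print(f"t = {t:.2f}")  # a small t means the slopes are not significantly different
```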

Joint Regression Analyses

Y_ijk = µ + g_i + e_j + ge_ij + E_ijk
ge_ij = β_i e_j + δ_ij
Y_ijk = µ + g_i + (1 + β_i) e_j + δ_ij + E_ijk

[Figure: yield of genotypes a, b, c, d plotted against environment means]

Joint Regression Example
- Class notes, Table 15, page 229
- 20 canola (Brassica napus) cultivars
- Nine locations; seed yield

Joint Regression Example

Westar = 0.94 x Mean

Source       df   SS
Regression    1   b.SP(x,y) = [SP(x,y)]²/SS(x)
Residual     12   difference = SS(Res)
Total        13   SS(y)

Joint Regression Example

Westar = 0.94 x Mean

Source       df   SSq   MSq
Regression    1     …     … ***
Residual     12     …     …

Joint Regression Example

Bounty = 1.12 x Mean

Source       df   SS
Regression    1   b.SP(x,y) = [SP(x,y)]²/SS(x)
Residual     12   difference = SS(Res)
Total        13   SS(y)

Joint Regression Example

Bounty = 1.12 x Mean

Source       df   SSq   MSq
Regression    1     …     … ***
Residual     12     …     …
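The per-cultivar step of the joint regression can be sketched directly: regress one cultivar's site yields onto the site mean yields to obtain its sensitivity b (cf. Westar b = 0.94, Bounty b = 1.12 above). The site means and yields below are synthetic, since the canola data from the class notes are not reproduced in this transcript.

```python
# Finlay-Wilkinson-style sensitivity: slope of a cultivar's yields
# regressed on the environment (site) mean yields.

def fw_slope(site_means, cultivar_yields):
    mx = sum(site_means) / len(site_means)
    my = sum(cultivar_yields) / len(cultivar_yields)
    sp = sum((x - mx) * (y - my) for x, y in zip(site_means, cultivar_yields))
    ssx = sum((x - mx) ** 2 for x in site_means)
    return sp / ssx

site_means = [1.8, 2.1, 2.5, 2.9, 3.4, 3.6, 4.0, 4.3, 4.8]  # 9 sites, t/ha
# a cultivar responding slightly less than average (b < 1):
yields = [0.94 * m + 0.3 for m in site_means]
print(f"b = {fw_slope(site_means, yields):.2f}")
```

A b near 1 tracks the site average, b > 1 marks a responsive (high-input) cultivar, and b < 1 a stable one.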

Joint Regression Example

Source        SS
Heter. Reg.   ∑[SP(x,y_i)]²/SS(x) - [∑SP(x,y_i)]²/[SS(x)]²
Residual      ∑SS(Res_i)
G x E         459.4

Joint Regression Example

Joint Regression ~ Example #2

[Figure: joint-regression lines for genotypes A, B, C]

Problems with Joint Regression
- Non-independence: genotype values are regressed onto site means, which are derived including those same genotype values.
- The x-axis values (site means) are themselves subject to error, violating a basic regression assumption.
- Sensitivity (β-values) is correlated with the genotype mean.

Problems with Joint Regression
- Non-independence: genotype values are regressed onto site means, which are derived including those same genotype values. Possible remedies:
  - exclude the genotype's own value from the site mean used for its regression
  - regress onto values other than site means (e.g. control values)

Joint Regression ~ Example #2

Problems with Joint Regression
- The x-axis values (site means) are themselves subject to error, violating a basic regression assumption.
- Sensitivity (β-values) is correlated with the genotype mean.

Addressing the Problems
- Use genotype variance over sites to indicate sensitivity, rather than regression coefficients.
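The over-site variance idea can be sketched with the standard library alone. The two yield series below are hypothetical: one genotype performs consistently across sites (low over-site variance, in the spirit of 'Golden Promise' on the following slides), the other responds strongly to good sites (high variance, in the spirit of 'Ark Royal').

```python
# Stability as a genotype's variance over sites: low variance means
# consistent performance, high variance means site-responsiveness.
from statistics import pvariance, mean

stable    = [3.0, 3.1, 2.9, 3.0, 3.2, 2.8]  # t/ha over six sites
sensitive = [1.5, 2.4, 3.0, 3.6, 4.2, 5.1]

for name, y in [("stable", stable), ("sensitive", sensitive)]:
    print(f"{name}: mean = {mean(y):.2f}, over-site variance = {pvariance(y):.2f}")
```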

Genotype Yield over Sites ‘Ark Royal’

Genotype Yield over Sites ‘Golden Promise’

Over Site Variance

Univariate Probability Prediction

Over Site Variance

Univariate Probability Prediction

P(x > T) = ∫_T^∞ ƒ(x; µ, σ²) dx

(the probability that a genotype's performance, distributed over environments with mean µ and variance σ², exceeds the target T)

Environmental Variation

[Figure: two normal distributions with means µ_1 and µ_2 shown against the target T]

Use of Normal Distribution Function Tables

z = |T - m| / σ_g to predict values greater than the target (T)
z = |m - T| / σ_g to predict values less than the target (T)

Use of Normal Distribution Function Tables

The mean (m) and environmental variance (σ_g²) of a genotype are 12.0 t/ha and 16.0 respectively (so σ_g = 4). What is the probability that the yield of that genotype will exceed 14 t/ha when grown at a site chosen at random from all possible sites in the region?

Use of Normal Distribution Function Tables

z = (T - m) / σ_g = (14 - 12) / 4 = 0.5

From the normal distribution tables, the probability from -∞ to T is Φ(0.5) = 0.6915, so the answer is 1 - 0.6915 = 0.3085 (or 30.85% of all sites in the region).

Use of Normal Distribution Function Tables

The mean (m) and environmental variance (σ_g²) of a genotype are 12.0 t/ha and 16.0 respectively (so σ_g = 4). What is the probability that the yield of that genotype will exceed 11 t/ha when grown at a site chosen at random from all possible sites in the region?

Use of Normal Distribution Function Tables

z = (T - m) / σ_g = (11 - 12) / 4 = -0.25

From the normal distribution tables, Φ(0.25) = 0.5987; because z is negative, the answer is 1 - (1 - 0.5987) = 0.5987, or about 60% of all sites in the region.
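Both worked examples can be checked without tables by computing the normal CDF from the error function. The sketch below uses the slide's settings (m = 12 t/ha, σ_g = 4, targets 14 and 11 t/ha).

```python
# Normal tail probabilities via math.erf instead of printed tables.
from math import erf, sqrt

def phi(z):                        # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def p_exceed(target, m, sigma):    # P(yield > target)
    return 1.0 - phi((target - m) / sigma)

print(f"P(yield > 14) = {p_exceed(14, 12, 4):.4f}")  # 0.3085
print(f"P(yield > 11) = {p_exceed(11, 12, 4):.4f}")  # 0.5987
```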

Use of Normal Distribution Function Tables
- To exceed the target, with (T - m)/σ positive: probability = 1 - table value.
- To exceed the target, with (T - m)/σ negative: probability = table value.
- For less than the target, with (m - T)/σ positive: probability = 1 - table value.
- For less than the target, with (m - T)/σ negative: probability = table value.

Univariate Probability

Multivariate Probability Prediction

P = ∫_T1^∞ ∫_T2^∞ … ∫_Tn^∞ ƒ(x_1, x_2, …, x_n) dx_1 dx_2 … dx_n
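When the traits can be assumed independent, the n-fold integral above factorises into a product of univariate tail probabilities, which is easy to sketch; with correlated traits the joint integral must be evaluated numerically instead. The trait settings below are illustrative (the yield line reuses the earlier example, the oil-content line is invented).

```python
# Joint probability of meeting every target, assuming independent
# normally distributed traits: P(all) = product of marginal tails.
from math import erf, sqrt

def tail(target, m, sigma):        # P(x > target) for one trait
    z = (target - m) / sigma
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

traits = [  # (target, mean, sd)
    (14.0, 12.0, 4.0),  # yield target from the univariate example
    (40.0, 42.0, 2.0),  # e.g. oil content, hypothetical
]

p_all = 1.0
for t, m, s in traits:
    p_all *= tail(t, m, s)
print(f"P(all targets met) = {p_all:.4f}")
```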

Problems with the Probability Technique
- Setting suitable/appropriate target values:
  - control performance
  - industry (or other) standards
  - past experience
  - experimental averages

Problems with the Probability Technique
- Complexity of the analytical estimation when the number of variables is high:
  - use rank sums instead

Additive Main Effects and Multiplicative Interactions (AMMI)
- AMMI analysis partitions the residual interaction effects using principal components.
- Inspect a scatter plot of the first two principal component scores (PC1 vs PC2), or of the first score plotted against the mean.

AMMI Analyses

Y_ijk = µ + g_i + e_j + ge_ij + E_ijk

AMMI Analyses

Y_ijk - µ - g_i - e_j - E_ijk = ge_ij

AMMI Analyses

Y_ijk - µ - g_i - e_j - E_ijk = ge_ij

ge_11  ge_12  ge_13  …  ge_1n
ge_21  ge_22  ge_23  …  ge_2n
  …      …      …    …    …
ge_i1  ge_i2  ge_i3  …  ge_in
  …      …      …    …    …
ge_k1  ge_k2  ge_k3  …  ge_kn
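The multiplicative decomposition of that interaction matrix is a singular value decomposition. A minimal sketch: double-centre a genotype x environment yield table to get the ge_ij matrix, take its SVD, and use the first one or two terms (PC1, PC2) as biplot scores. The 4 x 5 yield table is synthetic.

```python
# AMMI sketch: interaction residuals -> SVD -> PC scores for a biplot.
import numpy as np

Y = np.array([
    [2.1, 2.8, 3.0, 3.6, 4.1],
    [2.4, 2.6, 3.3, 3.1, 4.4],
    [1.9, 3.0, 2.7, 3.8, 3.9],
    [2.6, 2.5, 3.4, 3.0, 4.2],
])

# interaction residuals: ge_ij = Y_ij - (row mean) - (col mean) + (grand mean)
ge = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + Y.mean()

U, s, Vt = np.linalg.svd(ge, full_matrices=False)
pc1_share = s[0] ** 2 / np.sum(s ** 2)   # share of interaction SS on PC1
print(f"PC1 captures {pc1_share:.1%} of the G x E sum of squares")

# genotype and site scores for the AMMI biplot:
g_scores = U[:, :2] * np.sqrt(s[:2])
e_scores = Vt[:2].T * np.sqrt(s[:2])
```

Plotting g_scores against e_scores gives the PC1/PC2 biplot shown on the following slides; genotypes and sites that plot close together interact positively.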

[AMMI biplot of seed yield: genotype scores G1-G4 and site scores S1-S7]
