OVERVIEW OF LINEAR MODELS
Consider the following linear model: y = Xβ + e
y = vector of observations
β = vector of parameters of the fixed effects
e = vector of random residuals
X = design (incidence) matrix that relates observations to fixed effects
ORDINARY LEAST SQUARES (OLS)
Assumes the residuals are homoscedastic and uncorrelated: Var(eᵢ) = σ²e for all i and Cov(eᵢ, eⱼ) = 0 for all i ≠ j. For y = Xβ + e the vector of residuals is e = y − Xβ. The OLS estimate of β is the vector b that minimizes the residual sum of squares (y − Xb)'(y − Xb), i.e. the unweighted sum of squared residuals is minimized.
ORDINARY LEAST SQUARES (OLS)
Taking derivatives and setting them to zero, the estimates are b = (X'X)⁻¹X'y. If the residuals follow a multivariate normal distribution, e ~ MVN(0, Iσ²e), then the OLS estimates are also the maximum likelihood (ML) estimates.
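As a sketch, the OLS estimate b = (X'X)⁻¹X'y can be computed directly; the data below are invented purely for illustration (an intercept plus one treatment indicator).

```python
import numpy as np

# Toy data (invented for illustration): intercept + treatment indicator.
X = np.column_stack([np.ones(6), [0, 0, 0, 1, 1, 1]])
y = np.array([9.8, 10.3, 10.1, 12.2, 11.9, 12.4])

# OLS estimate: b = (X'X)^{-1} X'y, solved without forming the inverse explicitly.
b = np.linalg.solve(X.T @ X, X.T @ y)

# The residual vector e = y - Xb is orthogonal to the columns of X.
e = y - X @ b
```

With this parameterization b[0] is the mean of the first group and b[1] the difference between group means.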
ORDINARY LEAST SQUARES (OLS)
If X'X is singular, these estimates still hold when a generalized inverse (X'X)⁻ is used. However, only certain linear combinations of the fixed effects (estimable functions) can then be estimated.
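A minimal sketch of the singular case, using numpy's Moore-Penrose generalized inverse (`pinv`); the overparameterized one-way layout and its values are invented. The solution vector itself is not unique, but an estimable contrast such as the difference between two levels is invariant to the choice of generalized inverse.

```python
import numpy as np

# Overparameterized one-way model: mean + two level indicators.
# Column 0 equals column 1 + column 2, so X'X is singular.
X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.]])
y = np.array([2.0, 2.2, 3.1, 2.9])

# One particular solution via a generalized inverse of X'X.
b = np.linalg.pinv(X.T @ X) @ X.T @ y

# b is not unique, but the estimable contrast (level 1 - level 2) is.
contrast = b[1] - b[2]
```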
GENERALIZED LEAST SQUARES (GLS)
When residuals are heteroscedastic and/or correlated, OLS estimates of the regression parameters and their standard errors are potentially biased. A more general regression analysis uses the covariance matrix of the vector of residuals, Var(e) = R. Lack of independence of the residuals is indicated by the presence of non-zero off-diagonal elements of R, and heteroscedasticity is indicated by differences among the diagonal elements of R.
GENERALIZED LEAST SQUARES (GLS)
Generalized (weighted) least squares takes these complications into account: the GLS estimate of β is b = (X'R⁻¹X)⁻¹X'R⁻¹y.
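A minimal sketch of the GLS estimate b = (X'R⁻¹X)⁻¹X'R⁻¹y with a diagonal (heteroscedastic) R; all values are invented for illustration.

```python
import numpy as np

X = np.column_stack([np.ones(4), [1.0, 2.0, 3.0, 4.0]])
y = np.array([2.1, 4.3, 5.9, 8.2])

# Heteroscedastic residual covariance: the last two observations are noisier.
R = np.diag([1.0, 1.0, 4.0, 4.0])
Rinv = np.linalg.inv(R)

# GLS: b = (X' R^-1 X)^-1 X' R^-1 y
b_gls = np.linalg.solve(X.T @ Rinv @ X, X.T @ Rinv @ y)

# With R = I, GLS reduces to OLS.
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

The GLS normal equations imply X'R⁻¹(y − Xb) = 0, i.e. the residuals are orthogonal to the columns of X in the R⁻¹ metric.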
Best Linear Unbiased Predictor (BLUP)
BLP assumes that the fixed effects are known, when in practice they are never known and must be estimated from the data. Examples of fixed effects in plant and animal breeding are spurious effects associated with blocks, locations, years, treatments, etc. However, some genetic effects, such as selection generation, variety, and seed source, may also be treated as fixed. BLUP simultaneously estimates the fixed effects and predicts the breeding values (random effects).
Best Linear Unbiased Predictor
Henderson (1949) developed the theory of BLUP, by which fixed effects and breeding values can be estimated simultaneously. The properties of BLUP are similar to those of BLP, and BLUP reduces to BLP when no adjustment for environmental factors is needed. The properties of BLUP are incorporated in its name.
Best Linear Unbiased Predictor
Best – it maximizes the correlation between the true (a) and the predicted (â) breeding value.
Linear – the predictor is a linear function of the observations.
Unbiased – E(â) = a.
Predictor – it predicts the true breeding value.
THE GENERAL MIXED MODEL
Consider the following linear mixed model: y = Xβ + Zu + e
y = vector of observations
β = vector of levels of the fixed effects
u = vector of levels of the random effects
e = vector of random residuals
X = design (incidence) matrix that relates observations to fixed effects
Z = design (incidence) matrix that relates observations to random effects
Best Linear Unbiased Predictor
Expectation of u and e: by definition, E(u) = E(e) = 0, and hence E(y) = Xβ.
Variance of u and e: Var(e) = Iσ²e = R, assumed i.i.d.; e includes random environmental and non-additive genetic effects. Var(u) = Aσ²a = G, where A is the numerator relationship matrix and σ²a is the additive genetic variance.
Covariance between u and e: Cov(u, e) = Cov(e, u) = 0.
Best Linear Unbiased Predictor
Expectation of y: E(y) = Xβ
Variance of y:
Var(y) = V = Var(Zu + e) = ZVar(u)Z' + Var(e) + Cov(Zu, e) + Cov(e, Zu)
       = ZGZ' + R + ZCov(u, e) + Cov(e, u)Z'
Since Cov(e, u) = Cov(u, e) = 0, V = ZGZ' + R
Best Linear Unbiased Predictor
Covariance between (y, u) and (y, e):
Cov(y, u) = Cov(Zu + e, u) = Cov(Zu, u) + Cov(e, u) = ZCov(u, u) = ZG
Cov(y, e) = Cov(Zu + e, e) = Cov(Zu, e) + Cov(e, e) = ZCov(u, e) + Cov(e, e) = R
Best Linear Unbiased Predictor
The problem with y = Xβ + Zu + e is to predict a linear function of β and u, say w = k'β + m'u. The predictor ŵ is selected such that it is unbiased, E(ŵ) = E(w), and has minimum prediction error variance (PEV).
Best Linear Unbiased Predictor
This minimization leads to the BLUP of u, û = GZ'V⁻¹(y − Xβ̂), and to the BLUE of β, β̂ = (X'V⁻¹X)⁻¹X'V⁻¹y, which is the generalized least squares (GLS) solution. The BLUE is the estimator of linear functions of the fixed effects that has minimum sampling variance, is unbiased, and is a linear function of the data.
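The closed forms β̂ = (X'V⁻¹X)⁻¹X'V⁻¹y and û = GZ'V⁻¹(y − Xβ̂) can be sketched on a toy two-group example; all variance values and observations are invented.

```python
import numpy as np

X = np.ones((4, 1))                                     # overall mean (fixed)
Z = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])  # two random-effect levels
G = 0.5 * np.eye(2)                                     # Var(u) = G
R = 1.0 * np.eye(4)                                     # Var(e) = R
y = np.array([3.0, 3.4, 5.1, 4.7])

V = Z @ G @ Z.T + R                                     # Var(y) = ZGZ' + R
Vinv = np.linalg.inv(V)

beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)  # BLUE (GLS)
u_hat = G @ Z.T @ Vinv @ (y - X @ beta_hat)                 # BLUP
```

Note how the BLUPs are shrunken toward zero relative to the raw group-mean deviations; that shrinkage is what distinguishes prediction of random effects from estimation of fixed effects.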
Best Linear Unbiased Predictor
The BLUP corresponds to the conditional expectation of u given y, E(u | y), under the assumption of multivariate normality. As noted, practical application of the expressions for the BLUE and the BLUP requires that the variance components be known. Thus, prior to the BLUP analysis, the variance components need to be estimated by ML or REML.
Best Linear Unbiased Predictor
Note that the solutions for the BLUE and the BLUP require the inverse of the covariance matrix V. When y has many thousands of observations, as is commonly the case in animal and plant breeding, the computation of V⁻¹ can be very difficult. Henderson offered a solution by proposing a more compact method for jointly obtaining β̂ and û: the mixed model equations (MME)

[X'R⁻¹X   X'R⁻¹Z      ] [β̂]   [X'R⁻¹y]
[Z'R⁻¹X   Z'R⁻¹Z + G⁻¹] [û] = [Z'R⁻¹y]
MME Advantages
The matrices to invert, R and G, are trivial if they are diagonal, so the submatrices in the MME are easier to compute than V⁻¹. The dimension of the coefficient matrix on the left-hand side, needed to obtain the solutions, is much smaller than the dimension of V.
MME
If R = Iσ²e, then R⁻¹ = Iσ⁻²e can be factored from both sides of the MME, so that with G = Aσ²a the equations become

[X'X   X'Z        ] [β̂]   [X'y]
[Z'X   Z'Z + A⁻¹λ] [û] = [Z'y],   where λ = σ²e/σ²a.

The MME may not be of full rank due to dependencies in the incidence matrix for the fixed environmental effects. When there is such a dependency, it may be necessary to set some levels of the fixed effects to zero to obtain solutions to the MME.
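A sketch of building and solving Henderson's MME on a toy example (all values invented). Because the matrices are tiny, we can also verify in the test that the MME solutions reproduce the V-based BLUE and BLUP, even though the MME never inverts V.

```python
import numpy as np

X = np.ones((4, 1))
Z = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
G = 0.5 * np.eye(2)
R = 1.0 * np.eye(4)
y = np.array([3.0, 3.4, 5.1, 4.7])

Rinv = np.linalg.inv(R)   # diagonal, trivial to invert
Ginv = np.linalg.inv(G)   # diagonal, trivial to invert

# Henderson's mixed model equations: only R and G are inverted, never V.
lhs = np.block([[X.T @ Rinv @ X, X.T @ Rinv @ Z],
                [Z.T @ Rinv @ X, Z.T @ Rinv @ Z + Ginv]])
rhs = np.concatenate([X.T @ Rinv @ y, Z.T @ Rinv @ y])

sol = np.linalg.solve(lhs, rhs)
beta_hat, u_hat = sol[:1], sol[1:]
```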
Assumptions for y = Xβ + Zu + e
The distributions of y, u, and e are MVN, implying that the traits are determined by many additive genes of infinitesimal effect at infinitely many unlinked loci (the infinitesimal model). The variance-covariance matrices R and G for the base population are assumed known. In practice they are never known, but under the infinitesimal model they can be estimated by REML. The MME can account for selection.
The generalized inverse of the coefficient matrix of the MME
Accuracy of evaluation: the generalized inverse of the coefficient matrix of the MME provides information on the sampling variance of β̂ and the prediction error variance of û.
Sampling variance and prediction error variance
Sampling variance of β̂: Var(β̂ − β) = C11σ²e
Prediction error variance (PEV) of û: Var(û − u) = C22σ²e
where C11 and C22 are the diagonal blocks of the generalized inverse of the MME coefficient matrix corresponding to β̂ and û.