Selection on an Index of Traits


Selection on an Index of Traits

A common way to select on a number of traits at once is to base selection decisions on a simple index of trait values,

$$I = \sum_j b_j z_j = b^T z$$

The resulting phenotypic and additive-genetic variances for this synthetic trait are

$$\sigma^2_I = \sigma(b^T z,\, b^T z) = b^T \sigma(z, z)\, b = b^T P b$$
$$\sigma^2_{A_I} = \sigma_A(b^T z,\, b^T z) = b^T \sigma_A(z, z)\, b = b^T G b$$

This gives the resulting heritability of the index as

$$h^2_I = \frac{\sigma^2_{A_I}}{\sigma^2_I} = \frac{b^T G b}{b^T P b}$$

Class problem: Enter the G and P matrices from Example 33.1 into R. For the vector of index weights b given there, compute the additive variance, phenotypic variance, and $h^2$ of the index.
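A minimal R sketch of this computation. The matrices from Example 33.1 are not reproduced in this transcript, so the P, G, and b values below are purely illustrative placeholders:

```r
# Placeholder 3x3 phenotypic (P) and additive-genetic (G) covariance
# matrices -- substitute the values from Example 33.1.
P <- matrix(c(10, 2, 1,
               2, 8, 2,
               1, 2, 6), nrow = 3, byrow = TRUE)
G <- matrix(c(4.0, 1.0, 0.5,
              1.0, 3.0, 1.0,
              0.5, 1.0, 2.0), nrow = 3, byrow = TRUE)
b <- c(1, 1, 1)  # illustrative index weights

sigma2_I  <- drop(t(b) %*% P %*% b)  # phenotypic variance of index, b'Pb
sigma2_AI <- drop(t(b) %*% G %*% b)  # additive variance of index, b'Gb
h2_I      <- sigma2_AI / sigma2_I    # heritability of the index
```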

Thus, the response in the index due to selection is

$$R_I = \bar\imath\, h^2_I\, \sigma_I = \bar\imath\, \frac{b^T G b}{b^T P b}\sqrt{b^T P b} = \bar\imath\, \frac{b^T G b}{\sqrt{b^T P b}}$$

How does selection on I translate into selection on the underlying traits? The selection differential on trait j is $\bar\imath\,\sigma(z_j, I)/\sigma_I$, giving

$$S_j = \frac{\bar\imath}{\sigma_I}\sum_k b_k P_{jk}, \qquad S = \frac{\bar\imath}{\sigma_I}\, P b$$

Since $R = G P^{-1} S$, the response vector for the underlying means becomes

$$R = G P^{-1} S = \frac{\bar\imath}{\sigma_I}\, G b = \bar\imath\, \frac{G b}{\sqrt{b^T P b}}$$

Class problem: For a selection intensity of $\bar\imath = 2.06$ (upper 5% selected), compute the response in the index, and the vectors of selection differentials and responses (S and R) for the vector of traits.
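Continuing the R sketch above (same placeholder P, G, and b):

```r
i_bar   <- 2.06                          # selection intensity, upper 5% saved
sigma_I <- sqrt(drop(t(b) %*% P %*% b))  # phenotypic SD of the index

R_I <- i_bar * drop(t(b) %*% G %*% b) / sigma_I  # response in the index
S   <- (i_bar / sigma_I) * P %*% b               # selection differentials
R   <- (i_bar / sigma_I) * G %*% b               # trait responses, R = G P^{-1} S
```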

Changes in the additive variance of the index: Bulmer's equation holds for the index, so the gametic-phase disequilibrium generated by selection changes $\sigma^2_{A_I}$ over the first few generations (reducing it under truncation selection). We can also apply the multivariate version of Bulmer's equation to follow the changes in the variance-covariance matrix of the component traits.
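For reference, the standard forms of these recursions (following Walsh and Lynch's notation, and assumed here to match the slide: d is the disequilibrium contribution to the variance of the index, D the disequilibrium matrix, and $\sigma^2_{z^*}$, $P^*$ the phenotypic variance and covariance matrix after selection):

$$d(t+1) = \frac{d(t)}{2} + \frac{h^4(t)}{2}\left[\sigma^2_{z^*}(t) - \sigma^2_z(t)\right]$$

$$D(t+1) = \frac{D(t)}{2} + \frac{1}{2}\, G(t)\, P(t)^{-1}\left[P^*(t) - P(t)\right] P(t)^{-1}\, G(t)$$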

The Smith-Hazel Index

Suppose we wish to maximize the response on a linear combination $a^T z$ of traits. This is done by selecting those individuals with the largest breeding value for this index, namely

$$H = a^T g$$

where g is the vector of breeding values. Hence, we wish to find a phenotypic index $I = b^T z$ so that the correlation between H and I in an individual is maximized.

Key: This vector of weights b is typically rather different from a!

The Smith-Hazel Index

Smith and Hazel showed that the maximal gain is obtained by selection on the index $b_s^T z$, where

$$b_s = P^{-1} G a$$

Thus, to maximize the response in $a^T z$ in the next generation, we choose those parents with the largest breeding value for this trait, $H = a^T g$. This is done by choosing individuals with the largest values of $I = b_s^T z$.

In-class problem

We wish to improve the index $I = a^T z$.
Compute the Smith-Hazel weights (using the previous P, G).
Compute the response for the SH index under truncation selection saving the upper 5%.
Compare this response to that from selecting directly on $I = a^T z$.
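A sketch in R, continuing with the placeholder matrices and intensity above; the weight vector a is illustrative, since the actual weights are not given in the transcript:

```r
a   <- c(1, 1, 1)            # illustrative economic weights
b_s <- solve(P, G %*% a)     # Smith-Hazel weights, b_s = P^{-1} G a

# Response vector and merit gain under the SH index
sig_SH  <- sqrt(drop(t(b_s) %*% P %*% b_s))
R_SH    <- (i_bar / sig_SH) * G %*% b_s
gain_SH <- drop(t(a) %*% R_SH)

# Compare with selecting directly on I = a'z
sig_a    <- sqrt(drop(t(a) %*% P %*% a))
R_direct <- (i_bar / sig_a) * G %*% a
gain_dir <- drop(t(a) %*% R_direct)   # gain_SH >= gain_dir
```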

Other indices

Heritability index: $b_i = a_i h^2_i$
Estimated index: the Smith-Hazel index computed from estimates of P and G
Base index: weight the phenotypes directly by their economic values
Elston (or weight-free) index
Retrospective index: recovers the weights implied by an observed response; since R = Gb, $b = G^{-1} R$

Problems with Smith-Hazel

Requires accurate estimates of P and G!
The values of P and G change each generation from LD!
Must deal with negative eigenvalues of $H = P^{-1} G$; one fix is bending.
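For reference, bending in the sense of Hayes and Hill (1981) shrinks the eigenvalues of $P^{-1}G$ toward their mean $\bar\lambda$; one common formulation (assumed here to be the version intended on the slide) is

$$G_\beta = (1-\beta)\, G + \beta\, \bar\lambda\, P, \qquad 0 \le \beta \le 1,$$

so that $P^{-1}G_\beta = (1-\beta)P^{-1}G + \beta\bar\lambda I$ has eigenvalues $(1-\beta)\lambda_i + \beta\bar\lambda$, which can be pushed positive by increasing $\beta$.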

Restricted and Desired-gains Indices

Instead of maximizing the response on an index, we may (additionally) wish to restrict the change in certain traits, or to achieve a desired gain in specific traits. Suppose for our soybean data we wish no response in trait 1.

Class problem: Suppose the weights are $a^T = (0, 1, 1)$, i.e., no weight on trait 1. Compute the response in trait 1 under this index.
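This reuses the Smith-Hazel machinery from the sketch above; the point to check is that zero economic weight on a trait does not imply zero response in it:

```r
a0   <- c(0, 1, 1)              # no economic weight on trait 1
b0   <- solve(P, G %*% a0)      # SH weights for this a
sig0 <- sqrt(drop(t(b0) %*% P %*% b0))
R0   <- (i_bar / sig0) * G %*% b0
R0[1]  # response in trait 1: generally nonzero despite a_1 = 0
```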

Morley (1955): Change trait $z_1$ while holding $z_2$ constant

We want $b_1$, $b_2$ such that there is no response in trait 2. Since the response vector is proportional to Gb, this requires

$$\sigma_A(z_2, I) = b_1\,\sigma_A(z_1, z_2) + b_2\,\sigma^2_A(z_2) = 0$$

Setting $b_1 = 1$ and solving gives the weights

$$b = \left(1,\; -\frac{\sigma_A(z_1, z_2)}{\sigma^2_A(z_2)}\right)^T$$

Note the sign typo in Equation 33.36.

Kempthorne-Nordskog restricted index

Suppose we wish the first k traits to be unchanged. Since the response of an index is proportional to Gb, the constraint is $C G b_r = 0$, where C is a k × m matrix with ones on the diagonal and zeros elsewhere (so that $C G b_r$ picks out the first k elements of $G b_r$). Kempthorne and Nordskog showed that $b_r$ is obtained by projecting the unrestricted weights b onto this constraint,

$$b_r = \left[I - P^{-1} G_r^T \left(G_r P^{-1} G_r^T\right)^{-1} G_r\right] b, \qquad G_r = C G$$

(one can verify directly that $G_r b_r = 0$).

Tallis restriction index

More generally, suppose we specify the desired gains d for k combinations of traits. Here the constraint is $C R = C G b_r = d$, and the solution is

$$b_r = P^{-1} G C^T \left(C G P^{-1} G C^T\right)^{-1} d$$

(substituting back shows $C G b_r = d$, as required).
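A small R sketch of the Tallis solution, again using the placeholder P and G; the constraint matrix C and gain vector d are illustrative:

```r
C <- matrix(c(1, 0, 0,
              0, 1, 0), nrow = 2, byrow = TRUE)  # constrain gains in traits 1-2
d <- c(1, 0)                                     # desired gains for those traits

Gr  <- C %*% G
b_r <- solve(P, t(Gr)) %*% solve(Gr %*% solve(P, t(Gr)), d)
drop(C %*% G %*% b_r)   # check: reproduces d
```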

Desired-gain indices

Suppose we desire a gain of R. Then since R = Gb, the desired-gains index is $b = G^{-1} R$.

Class problem Compute b for desired gains (in our soybean traits) of (1,-2,0)
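With the formula above and the placeholder G, this is a one-liner in R (the actual soybean G from the example would replace it):

```r
R_desired <- c(1, -2, 0)
b <- solve(G, R_desired)   # desired-gains weights, b = G^{-1} R
```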

Non-linear indices

Care is required in specifying the improvement goals when using a nonlinear merit function, as apparently subtle differences in the desired outcomes can become critical. A related concern is whether we wish to maximize the additive genetic value of merit in the parents or in their offspring. With a linear index these are equivalent, as the mean breeding value of the parents equals the mean value in their offspring; this is not the case with nonlinear merit.

Quadratic selection index

Linearization

Fitting the best linear index to a nonlinear merit function

Sets of response vectors and selection differentials

Given a fixed selection intensity $\bar\imath$, we would like to know the set of possible response vectors R (or selection differentials S). Substituting b from $R = (\bar\imath/\sigma_I)\, G b$ and using $\sigma^2_I = b^T P b$ recovers

$$R^T G^{-1} P\, G^{-1} R = \bar\imath^2$$

Hence, this equation describes a quadratic surface of possible R values. Similarly, from $S = (\bar\imath/\sigma_I)\, P b$,

$$S^T P^{-1} S = \bar\imath^2$$

Key: The optimal weights are a function of the intensity of selection.

Sequential Approaches for Multitrait Selection

Tandem selection
Independent culling
Selection of extremes

Index selection is generally optimal, but there may be economic reasons to prefer the other approaches.

Multistage selection: selecting on traits that appear at different life stages. Optimal selection schemes: Cotterill and James give the optimal 2-stage scheme. Select first on x (saving proportion $p_1$), then on y (saving $p_2 = p/p_1$). Goal: optimize the breeding value for merit, g.