1
Spherical Cows Grazing in Flatland: Constraints to Selection and Adaptation
Bruce Walsh (jbwalsh@u.arizona.edu), University of Arizona
Mark Blows, University of Queensland
2
Geometry and Biology
Geometry has a long and important history in biology:
Fisher's (1918) original orthogonal variance decomposition
D'Arcy Thompson's (1917) On Growth and Form
Fisher's (1930) geometric model for the adaptation of new mutations
The Wright (1932)-Simpson (1944) concept of a phenotypic adaptive topography
Lande-Arnold (1983) estimation of quadratic fitness surfaces
3
A “spherical cow” -- an overly simplified representation of a complex geometric structure. When considering adaptation, the appropriate geometry of the multivariate phenotype (and the resulting vector of breeding values) must be used; otherwise we are left with a misleading view of both selection and adaptation.
4
Geometric models for the adaptiveness of new mutations
R. A. Fisher
One of the first considerations of the role of geometry in evolution is Fisher's work on the probability that a new mutation is adaptive (has higher fitness than the wildtype from which it is derived). Fisher (1930) suggested that the number of independent traits under selection has important consequences for adaptation, and he used a fairly simple geometric argument to make this point.
5
The (2-D) geometry behind Fisher's model
[Figure: the wildtype phenotype z sits a distance d from the optimal (highest) fitness value in phenotypic space; a random mutation moves the phenotype a (random) distance r from the wildtype. The probability the new mutation is adaptive is simply the fraction of the arc of the circle of radius r lying inside the fitness contour through the starting phenotype; it is a function of r, d, and n.]
6
Fisher asked: if a mutation randomly moves the phenotype a distance r from its current position, what is the chance that an advantageous mutation (increased fitness) occurs? If there are n traits under selection, Fisher showed that this probability is given by

p_fav = (1/√(2π)) ∫_x^∞ exp(−y²/2) dy = 1 − erf(x), where x = r√n / (2d)

Note that p_fav decreases as x increases. Thus, increasing n results in a lower chance of an adaptive mutation.
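As a numerical sketch of this formula (the function and variable names are my own), the favorable-mutation probability is the upper tail of a standard normal evaluated at x = r√n/(2d):

```python
import math

def p_fav(r, d, n):
    """Probability a mutation of effect size r is favorable, with n traits
    under selection and distance d from the wildtype to the optimum."""
    x = r * math.sqrt(n) / (2.0 * d)
    # Upper tail of a standard normal at x; this is the slide's 1 - erf(x)
    # in Fisher's older convention for erf
    return 0.5 * (1.0 - math.erf(x / math.sqrt(2.0)))

# p_fav -> 0.5 as r -> 0, and for fixed r and d it falls as n grows
```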
7
[Figure: Prob(adaptive mutation), p_fav = 1 − erf(x), plotted against x = r√n / (2d); the probability starts at 0.5 when x = 0 and declines toward 0 as x grows.]
8
Extensions of Fisher's model
M. Kimura, A. Orr, S. Rice
9
Kimura and Orr offered an important extension of Fisher's model: Fisher simply considered the probability that the mutation was favorable, but the more relevant issue is the chance that the new mutation is fixed. Favorable mutations of larger effect might be rarer, but they have a higher probability of fixation. For example, as r -> 0, Prob(favorable) -> 0.5, but s -> 0, and the probability of fixation -> its neutral value (1/2N). Orr showed that the optimal mutation size is x ~ 0.925, or

r_opt ≈ 1.85 · d/√n
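A quick check of Orr's rule (a sketch with my own function names): with x_opt ≈ 0.925 and x = r√n/(2d), inverting gives r_opt = 2·d·x_opt/√n ≈ 1.85·d/√n:

```python
import math

X_OPT = 0.925  # Orr's optimal standardized mutation size

def r_opt(d, n):
    """Optimal mutational step: invert x = r*sqrt(n)/(2d) at x = X_OPT."""
    return 2.0 * d * X_OPT / math.sqrt(n)

# The optimal step shrinks as 1/sqrt(n): more traits, smaller ideal mutations
```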
10
Orr further showed that there is a considerable cost to complexity (the number of dimensions n under selection), with the rate of adaptation (favorable mutation rate times fixation probability) declining significantly faster than 1/n. Thus, the constraint on dimensionality may be much more severe than originally suggested by Fisher.
11
Fisher's model makes simplifying geometric assumptions
Two spherical cow assumptions!
Equal (and spherical) fitness contours for all traits
Equal (and spherical) distribution of mutational effects
12
Rice significantly relaxed the assumption of a spherical fitness surface around a single optimal value. The probability of adaptation on these surfaces depends upon their "effective curvature", roughly the harmonic mean of the individual curvatures. Recalling that the harmonic mean is dominated by small values, it follows that the probability of adaptation is likewise dominated by those fitness surfaces with low curvature (weak selection). However, on such surfaces s is small, and hence the fixation probability is small.
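The harmonic-mean point is easy to see numerically (toy curvature values of my own choosing):

```python
def harmonic_mean(values):
    """Harmonic mean: n over the sum of reciprocals."""
    return len(values) / sum(1.0 / v for v in values)

# Three strongly curved directions and one weakly curved (weakly selected) one
curvatures = [10.0, 10.0, 10.0, 0.1]
hm = harmonic_mean(curvatures)  # ~0.39, dominated by the single small curvature
```

The arithmetic mean of these curvatures is about 7.6, but the harmonic mean sits near the smallest value, which is why weakly selected directions dominate the effective curvature.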
13
Multivariate Phenotypes and Selection Response
Now let's move from the geometry of adaptive mutations to the evolution of a vector of traits, a multivariate phenotype. For univariate traits, the classic breeders' equation R = h²S relates the within-generation change S in mean phenotype to the between-generation change R (the response to selection).
14
The Multivariate Breeders' Equation
Russ Lande
Lande (1979) extended the univariate breeders' equation R = h²S to the response for a vector R of traits:

R = G P⁻¹ S, i.e., R = Var(A) Var⁻¹(P) S

Defining the selection gradient β = P⁻¹ S yields the Lande equation R = Gβ.
15
The selection gradient
Robertson and Price showed that S = Cov(w, z), so that the selection differential S is the covariance between (relative) fitness and phenotypic value. Since S is the vector of covariances and P the covariance matrix for z, it follows that β = P⁻¹ S is the vector of regression coefficients for predicting fitness w given the phenotypes z_i, e.g.,

w = a + Σ_{i=1}^n β_i z_i + e
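In code, the gradient and response follow directly (a minimal numpy sketch; the two-trait covariance matrices here are hypothetical numbers of my own, not from the slides):

```python
import numpy as np

# Hypothetical two-trait example
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])      # phenotypic covariance matrix
G = np.array([[1.0, 0.2],
              [0.2, 0.5]])      # additive-genetic covariance matrix
S = np.array([0.4, -0.1])       # within-generation selection differential

beta = np.linalg.solve(P, S)    # selection gradient, beta = P^-1 S
R = G @ beta                    # Lande equation, R = G beta
```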
16
G, β, and selective constraints
The selection gradient β measures the direction in which selection is trying to move the population mean to maximally improve fitness. A non-zero β_i means that selection is acting directly to change the mean of trait i. Multiplying β by G results in a rotation (moving the response away from the optimal direction) as well as a scaling (reducing the response). Thus, G imposes constraints on the selection response.
17
Thus G and β both describe something about the geometry of selection. The vector β is the optimal direction in which to move to maximally increase fitness. The covariance matrix G of breeding values describes the space of potential constraints on achieving this optimal response. Treating this multivariate problem as a series of univariate responses is incredibly misleading.
18
The problems of working with a lower-dimensional projection from a higher-dimensional space
Edwin Abbott Abbott, writing as A Square, 1884
19
The misleading univariate world of selection
For a single trait, we can express the breeders' equation as R = Var(A)·β. Consider two traits, z₁ and z₂, both heritable and both under direct selection. Suppose β₁ = 2, β₂ = −1, Var(A₁) = 10, Var(A₂) = 40. One would thus expect each trait to respond to selection, with R_i = Var(A_i)·β_i.
20
What is the actual response? There is not enough information to tell -- we also need Var(A₁, A₂), the genetic covariance. With no genetic covariance,

R = Gβ = [10 0; 0 40] [2; −1] = [20; −40]

However, with a different covariance structure,

R = Gβ = [10 20; 20 40] [2; −1] = [0; 0]
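Both cases can be verified directly (a minimal numpy check of the numbers on the slide):

```python
import numpy as np

beta = np.array([2.0, -1.0])

G_zero_cov = np.array([[10.0, 0.0],
                       [0.0, 40.0]])   # Cov(A1, A2) = 0
G_singular = np.array([[10.0, 20.0],
                       [20.0, 40.0]])  # Cov(A1, A2) = 20; G is singular

R1 = G_zero_cov @ beta   # the expected "univariate" responses
R2 = G_singular @ beta   # an absolute constraint: no response at all
```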
21
The notion of multivariate constraints is not new
Dickerson (1955): genetic variation in all of the components of a selection index, but no (additive) variation in the index itself. Lande also noted the possibility of constraints. There can be both phenotypic and genetic constraints:
Singularity of P: selection cannot independently act on all components
Singularity of G: certain combinations of traits show no additive variance
22
If the covariance matrix is not singular, how can we best quantify its constraints (if any)? One simple measure is the angle θ between the vectors of desired (β) and actual (R) responses. Recall that the angle between two vectors x and y is simply given by

cos(θ) = xᵀy / (||x|| ||y||)

If the inner product of β and R is zero, θ = 90°, and there is an absolute constraint. If θ = 0°, the response and gradient point in exactly the same direction (β is an eigenvector of G).
23
For the first of our examples, where

G = [10 0; 0 40], β = [2; −1]

the angle is

θ = cos⁻¹( Rᵀβ / (||R|| ||β||) ) = 37°

Note that θ = 37° even though there is no covariance between the traits, so this case reduces to two univariate responses. The constraint arises because there is much more genetic variation in trait 2 (the more weakly selected trait).
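Computing the angle for this example (a small sketch; angle_deg is my own helper):

```python
import numpy as np

def angle_deg(x, y):
    """Angle between vectors x and y in degrees, from cos(theta) = x.y/(|x||y|)."""
    c = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.degrees(np.arccos(c)))

G = np.array([[10.0, 0.0],
              [0.0, 40.0]])
beta = np.array([2.0, -1.0])
theta = angle_deg(G @ beta, beta)   # ~36.9 degrees, the slide's ~37
```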
24
Constraints and Consequences
Thus, it is theoretically possible to have a very constrained selection response -- in the extreme, none at all (G has a zero eigenvalue and β is an associated eigenvector). This is really an empirical question. At first blush, it would seem incredibly unlikely that β "just happens" to be near a zero eigenvector of G. However, selection tends to erode away additive variation for a trait under constant selection.
25
Empirical study from Mark's lab: cuticular hydrocarbons and mate choice in Drosophila serrata
Stephen Chenoweth, Emma Hine
26
Cuticular hydrocarbons D. serrata
27
For D. serrata, 8 cuticular hydrocarbons (CHCs) were found to be very predictive of mate choice. Laboratory experiments measured both β for this vector of 8 traits and the associated G matrix. While all CHC traits had significant heritabilities, the covariance matrix was found to be ill-conditioned, with the first two eigenvalues (with eigenvectors g₁, g₂) accounting for roughly 78% of the total genetic variation. Computing the angles between each of these two eigenvectors and β provides a measure of the constraints in this system.
28
g₁ = (0.232, 0.132, 0.255, 0.536, 0.449, 0.363, 0.430, 0.239)ᵀ
g₂ = (0.319, 0.182, 0.213, −0.436, 0.642, −0.362, −0.014, −0.293)ᵀ
β = (−0.099, −0.055, 0.133, −0.186, −0.133, 0.779, 0.306, −0.465)ᵀ

θ(g₁, β) = 81.5°, θ(g₂, β) = 99.7°

Thus much (at least 78%) of the usable genetic variation is essentially orthogonal to the direction in which selection is trying to move the population.
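These angles can be reproduced from the reported vectors using the same cosine formula as before (rounding in the published vectors leaves the results within about 0.1 degree of the slide's values):

```python
import numpy as np

g1 = np.array([0.232, 0.132, 0.255, 0.536, 0.449, 0.363, 0.430, 0.239])
g2 = np.array([0.319, 0.182, 0.213, -0.436, 0.642, -0.362, -0.014, -0.293])
beta = np.array([-0.099, -0.055, 0.133, -0.186, -0.133, 0.779, 0.306, -0.465])

def angle_deg(x, y):
    """Angle between vectors x and y in degrees."""
    c = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return float(np.degrees(np.arccos(c)))

theta1 = angle_deg(g1, beta)  # ~81.5 degrees
theta2 = angle_deg(g2, beta)  # ~99.7 degrees
```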
29
Evolution along “genetic lines of least resistance”
Assuming G remains (relatively) constant, can we relate population divergence to any feature of G? Schluter (1996) suggested that we can, as he observed that populations tend to diverge along the direction given by the first principal component of G (its leading eigenvector). Schluter called this evolution along “genetic lines of least resistance”: populations tend to diverge in the direction of g_max, in that the angle between the vector of between-population divergence in means and g_max was small.
30
There are two ways to interpret Schluter's observation of evolution along g_max: (i) such lines constrain selection, with departures away from such directions being difficult; (ii) such lines are also the directions along which maximal genetic drift is expected to occur. Under a simple Brownian-motion model of drift, the vector of means after t generations is distributed as

μ(t) ~ MVN( μ, (t / (2N_e)) · G )

so the maximal directions of change correspond to the leading eigenvectors of G.
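Because the drift covariance (t/(2N_e))·G is just a scalar multiple of G, its leading eigenvector coincides with g_max; a quick check with a toy G (numbers of my own choosing):

```python
import numpy as np

G = np.array([[10.0, 14.0],
              [14.0, 40.0]])          # toy G matrix
t, Ne = 100, 500
drift_cov = (t / (2 * Ne)) * G        # dispersion of means after t generations

# Leading eigenvectors (numpy's eigh sorts eigenvalues in ascending order,
# so the last column is the leading eigenvector)
g_max = np.linalg.eigh(G)[1][:, -1]
d_max = np.linalg.eigh(drift_cov)[1][:, -1]
# Same direction (up to sign): drift is maximal along g_max
```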
31
Looking at lines of least resistance in the Australian rainbow fish (genus Melanotaenia)
32
Katrina McGuigan, Megan Higgie
33
Two sibling species were measured, both of which have populations differentially adapted to lake vs. stream hydrodynamic environments. The vector of traits was morphological landmarks associated with overall shape (and hence potential performance in specific hydrodynamic environments). Here, there was no β to estimate; rather, the divergence vector d between the mean vectors for groups (e.g., the two species, the two environments within a species, etc.) was used. To test Schluter's ideas, the angles between g_max and the different d's were computed.
34
Divergence between species, as well as divergence among replicate hydrodynamic populations within each species, followed Schluter's results (small angular departures between the vector d of divergence in means and g_max). However, hydrodynamic divergence between lake and stream populations within each species was along directions quite removed from g_max (as well as from the other eigenvectors of G that described most of the genetic variation). Thus, the between- and within-species divergence within the same hydrodynamic environment is consistent with drift, while hydrodynamic divergence within each species had to occur against a gradient of very little genetic variation. One cannot rule out that adaptation to these environments resulted in a depletion of genetic variation along these directions; indeed, this may well be the case.
35
Beyond g_max: Using Matrix Subspace Projection to Measure Constraints
Schluter's idea is to examine the angle between the leading eigenvector of G and the vector of divergence. More generally, one can construct the subspace spanned by the first k eigenvectors, and examine the angle between β and its projection onto this space. This provides a measure of the constraints imposed by a subset of the usable variation.
36
An advantage of using a subspace projection is that G is often ill-conditioned, in that λ_max/λ_min is large. In such cases (as well as others!), estimation of G may result in estimates of eigenvalues that are very close to zero or even negative. Negative estimates arise from sampling (Hill and Thompson 1978), but values near zero may reflect the true biology, in that there is very little variation in certain dimensions.
37
It is often the case that G contains several eigenvalues whose associated eigenvectors account for almost no variation (i.e., λ/tr(G) ≈ 0). In such cases, most of the genetic variation resides on a lower-dimensional subspace. One can extract (estimate) a subspace of G that accounts for the vast majority of usable genetic variation by, for example, taking the leading k eigenvectors.
38
To do this, first construct the matrix A whose columns are the first k eigenvectors:

A = (g₁, g₂, ..., g_k)

The projection matrix for this subspace is given by

Proj = A (AᵀA)⁻¹ Aᵀ

Thus, the projection of β into this subspace is given by the vector

p = Proj · β = A (AᵀA)⁻¹ Aᵀ β

Note that this is the generalization of the projection of one vector onto another.
39
The constraint imposed within this subspace is given by the angle θ between p, the projection of β into this space, and β itself. For the Drosophila serrata CHC traits involved in mate choice, the first two eigenvalues account for roughly 80% of the total variation in G. The angle between β and its projection p into this subspace of the genetic variance is 77.1°. Thus the direction of optimal response is 77° away from the genetic variation described by this subspace (which spans 78% of the total variance).
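The 77.1-degree figure can be recovered from the slides' g₁, g₂, and β using the projection formula just given (a sketch; since g₁ and g₂ are nearly orthonormal eigenvectors, the general formula is numerically well behaved here):

```python
import numpy as np

g1 = np.array([0.232, 0.132, 0.255, 0.536, 0.449, 0.363, 0.430, 0.239])
g2 = np.array([0.319, 0.182, 0.213, -0.436, 0.642, -0.362, -0.014, -0.293])
beta = np.array([-0.099, -0.055, 0.133, -0.186, -0.133, 0.779, 0.306, -0.465])

A = np.column_stack([g1, g2])              # A = (g1, g2)
Proj = A @ np.linalg.inv(A.T @ A) @ A.T    # projection matrix onto span(g1, g2)
p = Proj @ beta                            # projection of beta into the subspace

c = (beta @ p) / (np.linalg.norm(beta) * np.linalg.norm(p))
theta = float(np.degrees(np.arccos(c)))    # ~77.1 degrees
```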
40
How typical is this amount of constraint?
Anna Van Homrigh looked at 9 CHCs involved in mate choice in Drosophila bunnanda. The estimated G for these traits had 98% of the total genetic variation in the first five PCs (the first four had 95% of the total variance). The angle between β and its projection into this 5-dimensional subspace was 88.2°. If the subspace of the first four PCs is considered, the projection is even more constrained, being 89.1° away from β. When the entire space of G is considered, the resulting angle between R and β is 67°.
41
Evolution Under Constraints or Evolution of Constraints?
G both constrains selection and itself evolves under selection. Over short time scales, if most alleles have modest effects, G changes due to selection generating linkage disequilibrium. The within-generation change in G under the infinitesimal model is

ΔG = −G β βᵀ G = −R Rᵀ
42
Thus, the (within-generation) change in the covariance of G between traits i and j is

ΔG_ij = Δσ(A_i, A_j) = −R_i R_j

The net result is that linkage disequilibrium increases any initial constraints. A simple way to see this is to consider selection on the index I = Σ_i β_i z_i. Selection on this index (which is the predicted fitness) results in decreased additive variance in this composite trait (Bulmer 1971).
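A numeric check of ΔG = −RRᵀ, using the no-covariance G and β from the earlier two-trait example:

```python
import numpy as np

G = np.array([[10.0, 0.0],
              [0.0, 40.0]])
beta = np.array([2.0, -1.0])

R = G @ beta                 # response: R = G beta = [20, -40]
dG = -np.outer(R, R)         # Delta G = -G beta beta^T G = -R R^T
# Each entry is Delta G_ij = -R_i * R_j, e.g. Delta G_11 = -400:
# disequilibrium removes variance along the direction of response
```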
43
Thus, as pointed out by Shaw et al. (1995), if one estimates G after several generations of random mating in the laboratory under little selection, existing linkage disequilibrium decays, and the resulting estimated G matrix may show less of a constraint than the actual G operating in nature (with its inherent linkage disequilibrium).
44
Why so much variation?
It is certainly not surprising that little usable genetic variation may remain along a direction of persistent directional selection. What is surprising, however, is that considerable genetic variation may exist along other directions. The quandary is not why there is so little usable variation, but rather why there is so much.
45
Quantitative genetics is in the embarrassing position, as a field, of having no models that adequately explain one of its central observations: genetic variation (measured by single-trait heritabilities) is common, and typically in the range of 0.2 to 0.4 for a wide variety of traits. As Johnson and Barton (2005) point out, the resolution of these issues likely resides in more detailed consideration of pleiotropy, wherein new mutations influence a number of traits (back to Fisher's model!).
46
Once again, it is likely we need to move to a higher-dimensional space to reasonably account for observations based on a projection into one dimension (i.e., standing heritability levels for a single trait). The final consideration with pleiotropy is not just the higher-dimensional fitness surface for the vector of traits that pleiotropic mutations influence, but also the distributional space of those mutations themselves.
47
The “deep” nature of G
Is the covariance structure G itself some optimal configuration for certain sets of highly correlated traits? Has there been selection on developmental processes to facilitate morphological integration (the various units of a complex trait functioning smoothly together), which in turn would result in constraints on the pattern of accessible mutations under pleiotropy (Olson and Miller 1958; Lande 1980)?
48
Developmental systems are networks
49
Some apparently general features of biological networks
First, they are small-world graphs, meaning that the mean path distance between any two nodes is short: the members live in a small world (cf. Bacon and Erdős numbers). The second feature shown by the regulatory and metabolic networks studied is that the degree distribution (the probability that a node is connected to k other nodes) follows a power law, P(k) ~ k^(−γ). Graphs with a power-law distribution of links are called scale-free graphs. Scale-free graphs show the very important feature that they are fairly robust to perturbations: most randomly chosen nodes can be removed with little effect on the system.
50
Our spherical cow may in reality have a very non-spherical distribution of new mutation phenotypes around a current phenotype.
51
Geometry of the fitness surface and geometry of the mutational space
Raw material -> Filter -> Fuel
Filter: effects of selection removing variation (geometry of the fitness surface)
Fuel: residual variation, constraints and usable evolutionary fuel (geometry of the subspace of usable variation relative to the direction of selection)
52
Stuart Barker
“For someone learning the trade of quantitative genetics in the late 1980's, Stuart's work was like a beacon of interest in a sea of allozymes; incisive reviews, classic experimental designs (even with allozymes!), and above all the innovative application of quantitative genetics to important and interesting questions in evolutionary biology.” -- Mark Blows