Matrix models for population management and conservation. 24-28 March 2012. Jean-Dominique LEBRETON, David KOONS, Olivier GIMENEZ.

Lecture 3: Matrix model theory. Hal CASWELL, showing a matrix model to a Laysan Albatross. Hal's book (Matrix Population Models, Sinauer, 2001) can be used both as a textbook and as a comprehensive reference.

From numerical to formal results

M V = λ V (in the numerical example, λ = 1.05)
M^t N_0 ≈ α(N_0) λ^t V asymptotically
V = (v_1, v_2)' … in loose notation

From numerical to formal results

t = … ; M^t = … ; M^t ./ M^(t-1) = … (termwise division)

M^t ≈ λ M^(t-1), similar to M V = λ V
M^(t-1) (and M^t) have columns ≈ proportional to V
M^t ≈ λ^t (u_1 V  u_2 V) = λ^t V U', with U' = (u_1  u_2) … in loose notation (' denotes the transpose)
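The termwise-division argument is easy to check numerically. A minimal NumPy sketch, using an illustrative 2 x 2 matrix (invented values, not the lecture's example):

```python
import numpy as np

# Illustrative 2 x 2 projection matrix (invented values).
M = np.array([[0.5, 1.2],
              [0.3, 0.6]])

Mt = np.linalg.matrix_power(M, 50)    # M^t
Mt1 = np.linalg.matrix_power(M, 49)   # M^(t-1)

ratio = Mt / Mt1                      # termwise division M^t ./ M^(t-1)
lam = np.max(np.abs(np.linalg.eigvals(M)))
print(ratio)                          # every entry ≈ dominant eigenvalue
print(lam)
```

For large t, every entry of the termwise ratio converges to the same number: the dominant eigenvalue λ.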

Transposition and matrix product

The transpose of U = (u_1, u_2)' is U' = (u_1  u_2).

If V = (v_1, v_2)', then V U' = [[v_1 u_1, v_1 u_2], [v_2 u_1, v_2 u_2]], a 2 x 2 matrix,
while U'V = u_1 v_1 + u_2 v_2 is a 1 x 1 matrix, i.e. a scalar, also denoted Σ u_i v_i.
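In NumPy the same distinction reads as follows; U and V are arbitrary illustrative vectors:

```python
import numpy as np

# Arbitrary illustrative column vectors.
U = np.array([[1.0], [2.0]])
V = np.array([[3.0], [4.0]])

outer = V @ U.T        # V U': a 2 x 2 matrix
inner = U.T @ V        # U'V: a 1 x 1 matrix, i.e. the scalar sum of u_i v_i
print(outer)           # [[3. 6.] [4. 8.]]
print(inner[0, 0])     # 11.0
```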

From numerical to formal results

M^t ≈ λ^t (u_1 V  u_2 V) = λ^t V U'

Or, equivalently and more rigorously: λ^(-t) M^t → V U', with u_i > 0, v_i > 0.

λ^(-(t+1)) M^(t+1) = λ^(-1) M λ^(-t) M^t → λ^(-1) M V U' = V U'
                   = λ^(-t) M^t λ^(-1) M → V U' λ^(-1) M

Hence V U' λ^(-1) M = V U'. Premultiply by U' and simplify by the scalar U'V, to get: U' M = λ U'.

Of eigenvalues and eigenvectors, and reproductive values

M V = λ V : eigenvalue λ and right eigenvector V
U' M = λ U' : eigenvalue λ and left eigenvector U

λ^(-t) M^t → V U' leads to M^t N_0 ≈ λ^t V (U' N_0): asymptotic exponential growth ("demographic ergodicity").

U' N_0 is a scalar, weighting the components of N_0 by the u_i = "reproductive values".
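A sketch of this asymptotic result with NumPy, on an illustrative matrix; U and V are scaled so that U'V = 1, matching the limit λ^(-t) M^t → V U':

```python
import numpy as np

# Illustrative 2 x 2 matrix (not from the lecture).
M = np.array([[0.5, 1.2],
              [0.3, 0.6]])

w, R = np.linalg.eig(M)
i = np.argmax(w.real)
lam = w[i].real
V = np.abs(R[:, i].real)              # dominant right eigenvector (stable structure)

wl, Lm = np.linalg.eig(M.T)           # eigenvectors of M' are left eigenvectors of M
U = np.abs(Lm[:, np.argmax(wl.real)].real)
U = U / (U @ V)                       # scale so that U'V = 1

N0 = np.array([10.0, 5.0])
t = 40
exact = np.linalg.matrix_power(M, t) @ N0
approx = lam**t * V * (U @ N0)        # M^t N_0 ≈ lambda^t V (U' N_0)
print(exact, approx)                  # nearly identical
```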

Why is it so? These results do not hold for all matrices. M must be such that M^t, for t large enough, has all its terms > 0 … i.e., M is a primitive (nonnegative, irreducible) matrix.
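Primitivity is easy to check numerically: some power of M must be entrywise positive. A sketch with a hypothetical 3 x 3 Leslie matrix (values invented for illustration):

```python
import numpy as np

# Hypothetical 3 x 3 Leslie matrix: fertilities 1 and 2, survivals 0.4 and 0.7.
M = np.array([[0.0, 1.0, 2.0],
              [0.4, 0.0, 0.0],
              [0.0, 0.7, 0.0]])

M4 = np.linalg.matrix_power(M, 4)
M5 = np.linalg.matrix_power(M, 5)
print((M4 > 0).all())   # False: some transitions still need more steps
print((M5 > 0).all())   # True: M is primitive
```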

Why is it so? n x n matrices have (in general) n eigenvalues, which are complex numbers.

However, nonnegative irreducible matrices have their largest-modulus eigenvalue λ equal to a positive real number (the Perron-Frobenius theorem).

In products such as M^t, this "dominant eigenvalue" tends to outweigh the influence of the other eigenvalues, i.e., when t → ∞: M^t N(0) ≈ α(N(0)) λ^t V.

Of eigenvalues and eigenvectors

Eigenvalues are the roots of det(M − λ I) = 0, where I is the n x n identity matrix.

General numerical analysis software (Matlab, Mathematica, …) or specialized software (ULM, …) will get eigenvalues and eigenvectors for you. Usually there are no closed-form formulas, but they are easy to get numerically.
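For example, with NumPy on an illustrative matrix, the roots of the characteristic polynomial and the output of the eigenvalue routine coincide:

```python
import numpy as np

# Illustrative 2 x 2 matrix.
M = np.array([[0.5, 1.2],
              [0.3, 0.6]])

coeffs = np.poly(M)          # coefficients of det(M - lambda*I), highest degree first
roots = np.roots(coeffs)     # roots of the characteristic polynomial
eigs = np.linalg.eigvals(M)  # same values, computed directly
print(np.sort(roots), np.sort(eigs))
```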

Of eigenvalues and eigenvectors

For M = [[p f_1, p f_2], [q_1, q_2]], the eigenvalues are the roots of

det(M − λ I) = λ^2 − (p f_1 + q_2) λ + (p f_1 q_2 − p f_2 q_1) = 0

and the largest root is

λ = [ p f_1 + q_2 + √( (p f_1 + q_2)^2 − 4 (p f_1 q_2 − p f_2 q_1) ) ] / 2

Of eigenvalues and eigenvectors

λ = [ p f_1 + q_2 + √( (p f_1 + q_2)^2 − 4 (p f_1 q_2 − p f_2 q_1) ) ] / 2

Even when there is a formula, λ is not a linear or simple function of the parameters. Yet, we need to know how λ varies when one or several parameter values change.
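The closed-form root can be checked against a numerical eigenvalue computation; the parameter values below are hypothetical, chosen only for illustration:

```python
import numpy as np

# Hypothetical parameters for the 2 x 2 model M = [[p*f1, p*f2], [q1, q2]].
p, f1, f2, q1, q2 = 0.6, 1.0, 2.0, 0.5, 0.8

M = np.array([[p * f1, p * f2],
              [q1,     q2    ]])

# Closed-form dominant root of lambda^2 - (p f1 + q2) lambda + (p f1 q2 - p f2 q1) = 0
lam_formula = (p*f1 + q2 + np.sqrt((p*f1 + q2)**2 - 4*(p*f1*q2 - p*f2*q1))) / 2
lam_num = max(np.linalg.eigvals(M).real)
print(lam_formula, lam_num)   # identical
```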

Sensitivity analysis

M = …, λ = …

What if swallows were not nesting at age 1? M = …, λ = …

Sensitivity analysis

What if we harvest a proportion h of a population? M → M_h = (1−h) M.

M V = λ V ⇒ (1−h) M V = (1−h) λ V. Hence M_h V = (1−h) λ V, so λ_h = (1−h) λ, and the asymptotic structure V is unchanged.

If you harvest each year 30% of a roe deer population whose growth rate is 40% (λ = 1.4), λ_h is 1.4 × (1 − 0.3) = 0.98, i.e. the population will drop at a rate of 2% per year.
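A quick numerical check that λ_h = (1−h) λ, on an illustrative matrix with h = 0.3:

```python
import numpy as np

# Illustrative matrix, dominant eigenvalue ≈ 1.152.
M = np.array([[0.5, 1.2],
              [0.3, 0.6]])
h = 0.3                        # harvest 30% each year

lam = max(np.linalg.eigvals(M).real)
lam_h = max(np.linalg.eigvals((1 - h) * M).real)
print(lam_h, (1 - h) * lam)    # equal: harvesting rescales lambda, not V
```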

Sensitivity Analysis (  0 ) 00  0 +  (  0 +  ) (  0 )+  In more general cases, can be approximated by a linear function Generic parameter 

Sensitivity Analysis (  0 ) 00  0 +  (  0 +  ) (  0 )+  In more general cases, can be approximated by a linear function Generic parameter  Sensitivity (of wrt  )
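A finite-difference sketch of this linear approximation, taking θ to be one entry of an illustrative matrix (an arbitrary choice):

```python
import numpy as np

def lam(theta):
    # lambda as a function of a generic parameter theta; here theta is the
    # adult survival entry m_22 of an illustrative matrix (arbitrary choice).
    M = np.array([[0.5, 1.2],
                  [0.3, theta]])
    return np.max(np.linalg.eigvals(M).real)

theta0, d = 0.6, 1e-6
sens = (lam(theta0 + d) - lam(theta0)) / d   # finite-difference sensitivity
predicted = lam(theta0) + sens * 0.05        # linear approximation at theta0 + 0.05
print(predicted, lam(theta0 + 0.05))         # close agreement
```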

Sensitivity and elasticity

Sensitivity: ∂λ/∂θ — absolute change in λ vs absolute change in θ.

Elasticity: ∂log λ / ∂log θ = (θ / λ(θ)) ∂λ(θ)/∂θ — relative change in λ vs relative change in θ.

θ can be a matrix element (θ = m_ij) or a lower-level parameter (e.g. θ = f_1 or θ = s_1).

Sensitivity to matrix elements: perturbation analysis

∂λ/∂m_ij ?

M V = λ V [1]
(M + dM)(V + dV) = (λ + dλ)(V + dV) [2]
M V + dM V + M dV + dM dV = λ V + dλ V + λ dV + dλ dV [2']

Neglecting second-order terms: M V + dM V + M dV = λ V + dλ V + λ dV [2'']

Premultiplying [2''] by U' and using [1]: U' dM V + U' M dV = dλ U' V + λ U' dV
Since U' M = λ U', the dV terms cancel: U' dM V = dλ U' V [3]

Sensitivity to matrix elements: perturbation analysis

For a change in a single matrix term, m_ij → m_ij + dm_ij, dM has dm_ij in position (i, j) and zeros elsewhere, hence:

U' dM V = u_i v_j dm_ij [4]

Sensitivity to matrix elements: perturbation analysis

Combining [3] (U' dM V = dλ U'V) and [4] (U' dM V = u_i v_j dm_ij):

∂λ/∂m_ij = u_i v_j / U'V = u_i v_j / Σ u_i v_i

a beautiful result due to Hal Caswell (1978). For the elasticity:

∂log λ / ∂log m_ij = (m_ij / λ) u_i v_j / U'V = m_ij u_i v_j / (λ Σ u_i v_i)
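Caswell's formula can be verified numerically against a brute-force perturbation, and the elasticities checked to sum to 1 (illustrative matrix):

```python
import numpy as np

# Illustrative 2 x 2 matrix.
M = np.array([[0.5, 1.2],
              [0.3, 0.6]])

w, R = np.linalg.eig(M)
i = np.argmax(w.real)
lam = w[i].real
V = np.abs(R[:, i].real)                      # right eigenvector
wl, Lm = np.linalg.eig(M.T)
U = np.abs(Lm[:, np.argmax(wl.real)].real)    # left eigenvector

S = np.outer(U, V) / (U @ V)   # sensitivities: dlambda/dm_ij = u_i v_j / U'V
E = M * S / lam                # elasticities: (m_ij/lambda) dlambda/dm_ij

# Brute-force check on entry (1, 2) (index [0, 1] in Python):
d = 1e-7
Md = M.copy(); Md[0, 1] += d
fd = (np.max(np.linalg.eigvals(Md).real) - lam) / d
print(S[0, 1], fd)             # agree
print(E.sum())                 # elasticities sum to 1
```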

Sensitivity to matrix elements: perturbation analysis

The normalization used is in general obvious from the context:
- V = (v_i), under Σ v_i = 1, is the stable structure.
- m_ij u_i v_j / λ, under Σ u_i v_i = 1, is the relative contribution, in the asymptotic regime, of component i to component j, expressed with reproductive value as the currency. As a consequence, the elasticities (with respect to the m_ij) sum up to 1.
- When speaking of sensitivity, we will use Σ u_i v_i = 1; then ∂λ/∂m_ij = u_i v_j.

Sensitivity to lower-level parameters: the chain rule

∂λ/∂θ ? ∂λ/∂θ = Σ_i Σ_j (∂λ/∂m_ij)(∂m_ij/∂θ)

Barn Swallow example (with Σ u_i v_i = 1, m_11 = s_0 f_1, m_12 = s_0 f_2):

∂λ/∂s_0 = (∂λ/∂m_11)(∂m_11/∂s_0) + (∂λ/∂m_12)(∂m_12/∂s_0) = u_1 v_1 f_1 + u_1 v_2 f_2

s_0 ∂λ/∂s_0 = u_1 (v_1 f_1 s_0 + v_2 f_2 s_0)

M V = λ V ⇒ v_1 f_1 s_0 + v_2 f_2 s_0 = λ v_1, hence the elasticity of λ with respect to s_0:

(s_0/λ) ∂λ/∂s_0 = u_1 v_1
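A sketch of the chain-rule computation for s_0, with hypothetical Barn Swallow-style parameter values (invented for illustration):

```python
import numpy as np

# Hypothetical parameters: m_11 = s0*f1, m_12 = s0*f2 (invented values).
s0, f1, f2, q1, q2 = 0.2, 1.5, 3.0, 0.5, 0.5
M = np.array([[s0 * f1, s0 * f2],
              [q1,      q2     ]])

w, R = np.linalg.eig(M)
i = np.argmax(w.real)
lam = w[i].real
V = np.abs(R[:, i].real)
wl, Lm = np.linalg.eig(M.T)
U = np.abs(Lm[:, np.argmax(wl.real)].real)
U = U / (U @ V)                    # normalize so that sum u_i v_i = 1

S = np.outer(U, V)                 # dlambda/dm_ij = u_i v_j under this normalization
dlam_ds0 = S[0, 0] * f1 + S[0, 1] * f2   # chain rule over m_11 and m_12
elas = (s0 / lam) * dlam_ds0
print(elas, U[0] * V[0])           # equal: elasticity of lambda wrt s0 is u1 v1
```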