Downscaling of European land use projections for the ALARM toolkit
Joint work between UCL (Nicolas Dendoncker, Mark Rounsevell, Patrick Bogaert) and BioSS (Adam Butler, Glenn Marion)
Athens ALARM meeting, January 2007
Overview
- Current-day land use data are available at a relatively fine spatial resolution (e.g. CORINE), but land use projections within ALARM are generated at a much coarser resolution
- There is a need to convert these projections onto finer spatial scales in a way that properly reflects the statistical properties of high-resolution land use maps
- Using the downscaling method developed by Dendoncker et al. (2006), we aim to generate CORINE-scale projections of European land use under the ALARM scenarios
Assumptions
- Downscaling introduces additional error into land use projections: this is the unavoidable price of working at a high spatial resolution
- The method relies on the assumption that the overall frequencies of the different land use types will change in the future, but that the spatial structure of the landscape will not
- Current-day land use is assumed to be known without error
- It is explicitly assumed that urban areas will remain urban
Outputs
- Land use maps at the CORINE scale (250 x 250 m) for future years under each of the ALARM scenarios, in terms of four broad land use types: urban, arable, grassland & forestry
- Maps could potentially be adapted to provide information on more detailed land classes (e.g. forest & grassland types, individual crops), but this would require additional assumptions and/or make use of additional data
- Generic tools for applying land use downscaling to other data, & training materials to illustrate how the downscaling methods work
Some potential uses
- …as projected environmental data for local field studies, e.g. FSN
- …as fine-scale inputs to mechanistic ecological models, e.g. LPJ
- …as land use inputs for climate envelope analyses
- …as a resource for future ecological research, via the toolkit
Unresolved issues
- How can we best deal with land use classes that are only present in future scenarios, e.g. surplus land, biofuels?
- How can we best deal with protected areas, e.g. NATURA sites?
- How should we visualise downscaled land use maps, and how can we best incorporate these into the toolkit?
- Should we try to quantify & represent the uncertainties involved in land use projection? If so, how?
Example: Luxembourg
Statistical methodology
Developed at UCL by Dendoncker et al. (2006):
1. Fit a multinomial autologistic regression model to current CORINE land use data, which is at high spatial resolution
2. Use the ALARM scenario data to calculate the marginal probabilities associated with the different land use classes in future, ensuring that these vary smoothly over space
3. Combine these two sources of information using Bayes' theorem, in order to estimate the conditional probabilities associated with each land use class at high spatial resolution
4. Take the projected land class for a CORINE cell to be the class that has the highest conditional probability for that cell
1. Current land use
Data: x_{ik} = 1 if CORINE cell i has land use class k, and x_{ik} = 0 otherwise
Model: we assume that the probability that cell i belongs to class k, conditionally upon the values x_{Ik} for all other cells I, is

    P_{ik} = \frac{\exp(\beta_k n_{ik})}{\sum_l \exp(\beta_l n_{il})}

where n_{ik} denotes the number of cells in the neighbourhood of cell i that belong to class k, and where the \beta_k are unknown parameters
The marginal probability associated with class k is equal to

    p_k = \frac{1}{N} \sum_i x_{ik}

the overall frequency of class k among the N cells in the current map
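A minimal R sketch of this step (the real model is fitted to CORINE data in SAS; the grid, the queen's-case neighbourhood and the beta values below are illustrative assumptions):

    ## Toy autologistic conditional probabilities P_ik on a small grid
    set.seed(1)
    nr <- 10; nc <- 10; K <- 4                   # grid size & number of classes
    landuse <- matrix(sample(1:K, nr * nc, replace = TRUE), nr, nc)
    beta <- c(0.8, 0.6, 0.5, 0.4)                # hypothetical parameters

    ## n_ik: number of queen's-case neighbours of cell i in class k
    n <- array(0, c(nr, nc, K))
    for (dr in -1:1) for (dc in -1:1) {
      if (dr == 0 && dc == 0) next
      rows <- pmin(pmax(1:nr + dr, 1), nr)       # crude edge handling for the toy
      cols <- pmin(pmax(1:nc + dc, 1), nc)
      shifted <- landuse[rows, cols]
      for (k in 1:K) n[, , k] <- n[, , k] + (shifted == k)
    }

    ## P_ik = exp(beta_k * n_ik) / sum_l exp(beta_l * n_il)
    expo <- array(0, c(nr, nc, K))
    for (k in 1:K) expo[, , k] <- exp(beta[k] * n[, , k])
    P <- expo / c(apply(expo, c(1, 2), sum))     # normalise over classes

    ## Current marginal probabilities p_k: observed class frequencies
    p <- tabulate(landuse, nbins = K) / (nr * nc)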
2. Future land use
Data: f_{jk} = fraction of ALARM cell j that is projected to have land use class k (for a particular future year under a particular scenario)
We assume that the marginal probability that CORINE cell i will have land class k in future is equal to the inverse-distance weighted sum

    m_{ik} = \frac{\sum_j d_{ij}^{-q} f_{jk}}{\sum_j d_{ij}^{-q}}

where d_{ij} is the distance between the midpoints of cells i and j
Using a weighted sum ensures smoothness; the value of q controls how smooth we would like the probability surface to be
If cell i has the same midpoint as ALARM cell j then d_{ij} = 0, and m_{ik} = f_{jk}
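A sketch of this step in R; the 10 x 10 fine grid, the 2 x 2 coarse grid and q = 2 are assumptions made purely for illustration:

    ## Smooth coarse-scale fractions f_jk into fine-scale marginals m_ik
    set.seed(2)
    K <- 4
    corine.xy <- expand.grid(x = seq(5, 95, by = 10),       # fine-cell midpoints
                             y = seq(5, 95, by = 10))
    alarm.xy  <- expand.grid(x = c(25, 75), y = c(25, 75))  # coarse-cell midpoints

    ## f_jk: projected class fractions per ALARM cell (each row sums to one)
    f <- matrix(runif(nrow(alarm.xy) * K), nrow(alarm.xy), K)
    f <- f / rowSums(f)

    q <- 2                                         # smoothness parameter
    d <- as.matrix(dist(rbind(corine.xy, alarm.xy)))
    d <- d[1:nrow(corine.xy), nrow(corine.xy) + 1:nrow(alarm.xy)]

    w <- d^(-q)                                    # inverse-distance weights
    hit <- which(rowSums(d == 0) > 0)              # midpoint coincides with an
    w[hit, ] <- 1 * (d[hit, , drop = FALSE] == 0)  #   ALARM cell: m_ik = f_jk
    w <- w / rowSums(w)                            # weights sum to one per cell

    m <- w %*% f                                   # m_ik = sum_j w_ij f_jk
    stopifnot(all(abs(rowSums(m) - 1) < 1e-10))    # valid distribution per cell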
3. Bayes' theorem
Using Bayes' theorem we can calculate the future conditional probability that CORINE cell i belongs to land use class k as

    C_{ik} \propto \frac{P_{ik} \, m_{ik}}{p_k}

For each cell i we need to rescale the C_{ik} so that they sum to one, and hence are valid probabilities; however, this means that the marginal probabilities will no longer be equal to the values m_{ik} that we computed from the ALARM scenario data
We can use an iterative procedure to ensure that the rescaled conditional probabilities also respect these marginal probabilities; we alternate between

    C_{ik} \leftarrow C_{ik} \, m_{ik} / \hat{m}_{ik}   and   C_{ik} \leftarrow C_{ik} / \sum_l C_{il}

where \hat{m}_{ik} denotes the marginal probability implied by the current values of C_{ik}, until the marginal probabilities have approximately converged to m_{ik} after T steps
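A sketch of this step in R; the simulated inputs and the simple proportional-fitting loop below, which matches the class-level averages of the m_{ik} rather than enforcing the constraint locally, are assumptions about one plausible form of the iteration:

    ## Toy Bayes combination followed by iterative rescaling
    set.seed(3)
    N <- 100; K <- 4
    P <- matrix(rexp(N * K), N, K); P <- P / rowSums(P)  # stands in for the
    m <- matrix(rexp(N * K), N, K); m <- m / rowSums(m)  #   autologistic & IDW steps
    p <- colMeans(P)                                     # current marginals p_k

    ## Bayes' theorem: C_ik proportional to P_ik * m_ik / p_k
    C <- sweep(P * m, 2, p, "/")
    C <- C / rowSums(C)                        # rescale to valid probabilities

    ## Alternate between matching the target marginals and renormalising
    for (t in 1:100) {
      C <- sweep(C, 2, colMeans(m) / colMeans(C), "*")   # match marginals
      C <- C / rowSums(C)                                # rows sum to one
      if (max(abs(colMeans(C) - colMeans(m))) < 1e-8) break
    }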
4. Prediction
Finally, we predict that CORINE cell i will belong to the class that has the highest associated conditional probability, so that

    \hat{k}_i = \arg\max_k C_{ik}
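A short R sketch of this final step (C is simulated here to stand in for the rescaled conditional probabilities from the previous step):

    ## Assign each cell its most probable land use class
    set.seed(4)
    C <- matrix(rexp(100 * 4), 100, 4); C <- C / rowSums(C)
    k.hat <- max.col(C, ties.method = "first")  # row-wise arg-max
    table(k.hat)                                # projected class frequencies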
Computation
- The procedure is not inherently expensive to run, but the vast size of the baseline CORINE dataset means that computational issues will be the key technical problem in applying it at a pan-European scale
- Currently implemented using a mixture of SAS, to fit the multinomial autologistic regression model, and Matlab, for the remaining steps
- We are looking at the feasibility of porting the code to R, with most of the heavy internal computation done in Fortran90, so that it could easily be integrated into the toolkit
- Could calculations be done online, via the ALARM map portal?