R for Macroecology Spatial models

Next week
- Any topics that we haven't talked about?
- Group projects

SAR models
- Augment standard OLS with an additional term that models the spatial autocorrelation.
- We'll focus on error SAR models, which focus on spatial pattern in the error part of the model (the corresponding R calls are sketched below).

  OLS:        Y = βX + ε
  SAR lag:    Y = ρWY + βX + ε
  SAR error:  Y = βX + λWu + ε

- Defining the spatial weights matrix, W, is crucial.
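
For concreteness, a minimal sketch of how the three models map onto R calls, assuming a hypothetical data frame dat with a response rich, a predictor temp, and a weights list w built with nb2listw() (covered below). lagsarlm() and errorsarlm() ship with older spdep versions; newer releases move them to the spatialreg package.

library(spdep)   # on newer installations, also library(spatialreg)
m_ols   = lm(rich ~ temp, data = dat)                      # OLS:       Y = βX + ε
m_lag   = lagsarlm(rich ~ temp, data = dat, listw = w)     # SAR lag:   Y = ρWY + βX + ε
m_error = errorsarlm(rich ~ temp, data = dat, listw = w)   # SAR error: Y = βX + λWu + ε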

Neighborhoods in R
- spdep
- dnearneigh()
- knearneigh()

dnearneigh(x, d1, d2, row.names = NULL, longlat = NULL)
- x: coordinates (matrix or SpatialPoints)
- d1, d2: minimum and maximum distances (in km if longlat = T)
- Returns a list of vectors giving the neighbors for each point

Neighborhoods in R: dnearneigh() example

> x = c(1,3,2,5)
> y = c(3,2,4,4)
> n = dnearneigh(cbind(x,y), d1 = 0, d2 = 3)
> n
Neighbour list object:
Number of regions: 4
Number of nonzero links: 10
Percentage nonzero weights: 62.5
Average number of links: 2.5
> str(n)
List of 4
 $ : int [1:2] 2 3
 $ : int [1:3] 1 3 4
 $ : int [1:3] 1 2 4
 $ : int [1:2] 2 3
 - attr(*, "class")= chr "nb"
 - attr(*, "nbtype")= chr "distance"
...

Converting a neighborhood to weights

nb2listw(neighbours, style = "W", zero.policy = NULL)
- neighbours: the neighbors list
- zero.policy: what to do with neighborless points
- style:
  W = row standardized (rows sum to 1)
  B = binary (0/1)
  C = global standardized (all links sum to n)
  U = C/n
  S = variance stabilization (Tiefelsdorf et al. 1999)

Converting a neighborhood to weights

> nb2listw(n, style = "W")$weights   # row standardized
[[1]]
[1] 0.5 0.5
[[2]]
[1] 0.3333333 0.3333333 0.3333333
[[3]]
[1] 0.3333333 0.3333333 0.3333333
[[4]]
[1] 0.5 0.5

> nb2listw(n, style = "B")$weights   # binary
[[1]]
[1] 1 1
[[2]]
[1] 1 1 1
[[3]]
[1] 1 1 1
[[4]]
[1] 1 1

> nb2listw(n, style = "C")$weights   # global standardization: all 10 links sum to n = 4
[[1]]
[1] 0.4 0.4
[[2]]
[1] 0.4 0.4 0.4
[[3]]
[1] 0.4 0.4 0.4
[[4]]
[1] 0.4 0.4

> nb2listw(n, style = "S")$weights   # variance stabilization
[[1]]
[1] 0.4494899 0.4494899
[[2]]
[1] 0.3670069 0.3670069 0.3670069
[[3]]
[1] 0.3670069 0.3670069 0.3670069
[[4]]
[1] 0.4494899 0.4494899

Comparing the styles: row standardization (W) emphasizes weakly connected points, because a point with few neighbors gets a large weight on each of its links; binary and global standardization (B, C) emphasize strongly connected points, because every link carries the same weight and well-connected points accumulate more of it; S tries to balance the two (Tiefelsdorf et al. 1999).

Lots of options – how to choose?
- Define the neighborhood
- Define the spatial weights matrix
- Try things out!
- Look for stability in model estimates
- Look for residual autocorrelation (a sketch of checking residuals follows below)
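
One way to check the last point, sketched with the hypothetical fits from above (m_ols, m_error) and a weights list w: test the model residuals for spatial autocorrelation with Moran's I.

lm.morantest(m_ols, listw = w)              # Moran's I test on OLS residuals
moran.test(residuals(m_error), listw = w)   # autocorrelation remaining after the SAR fit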

Defining the neighborhood - d

# 1. Small distance
n = dnearneigh(cbind(x,y), d1 = 0, d2 = 0.1)
w1 = nb2listw(n, zero.policy = T)

# 2. Medium distance
n = dnearneigh(cbind(x,y), d1 = 0, d2 = 0.3)
w2 = nb2listw(n, zero.policy = T)

# 3. Large distance
n = dnearneigh(cbind(x,y), d1 = 0, d2 = 0.5)
w3 = nb2listw(n, zero.policy = T)

par(mfrow = c(1,4))
plot(x, y, axes = F, xlab = "", ylab = "")
plot(w1, cbind(x,y))
plot(w2, cbind(x,y))
plot(w3, cbind(x,y))

Defining the neighborhood - K

# 4. 2 neighbors
n = knn2nb(knearneigh(cbind(x,y), k = 2, RANN = F))
w4 = nb2listw(n, zero.policy = T)

# 5. 4 neighbors
n = knn2nb(knearneigh(cbind(x,y), k = 4, RANN = F))
w5 = nb2listw(n, zero.policy = T)

# 6. 8 neighbors
n = knn2nb(knearneigh(cbind(x,y), k = 8, RANN = F))
w6 = nb2listw(n, zero.policy = T)

par(mfrow = c(1,4))
plot(x, y, axes = F, xlab = "", ylab = "")
plot(w4, cbind(x,y))
plot(w5, cbind(x,y))
plot(w6, cbind(x,y))

Neighborhoods on grids

x = rep(1:20, 20)
y = rep(1:20, each = 20)
plot(x, y)

# Rook's case: orthogonal neighbors only
n = dnearneigh(cbind(x,y), d1 = 0, d2 = 1)
w = nb2listw(n)
plot(w, cbind(x,y))

# Queen's case: orthogonal and diagonal neighbors
n = dnearneigh(cbind(x,y), d1 = 0, d2 = sqrt(2))
w = nb2listw(n)
plot(w, cbind(x,y))
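
As an aside (not from the original slides): for regular grids, spdep also provides cell2nb(), which builds rook or queen neighborhoods directly from the grid dimensions. A minimal sketch for the 20 x 20 grid above:

n_rook  = cell2nb(20, 20, type = "rook")
n_queen = cell2nb(20, 20, type = "queen")
w_queen = nb2listw(n_queen)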

Data size
- SAR models can take a very long time to fit
- 2000 points is the maximum I have used
- sample() is useful again (see the sketch below)
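
A sketch of the subsampling idea, assuming a hypothetical data frame dat with coordinate columns x and y (names and the 500 km cutoff are illustrative):

keep = sample(nrow(dat), 2000)                   # random subset of 2000 rows
n = dnearneigh(cbind(dat$x, dat$y)[keep, ], d1 = 0, d2 = 500, longlat = TRUE)
w = nb2listw(n, zero.policy = TRUE)
# then fit the SAR model to dat[keep, ] with weights w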

Fitting the SAR model: errorsarlm()

errorsarlm(formula, listw, zero.policy = NULL)
- formula: just like lm()
- listw: the neighborhood weights
- zero.policy: what to do with neighborless points
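
A minimal usage sketch, assuming the same hypothetical data frame dat and weights list w as above:

m_err = errorsarlm(rich ~ temp, data = dat, listw = w, zero.policy = TRUE)
summary(m_err)   # reports lambda, the coefficients, and the AIC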

Try it out
- Build several SAR models with different W
- Which one works best? (one way to compare is sketched below)
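
One way to compare, assuming hypothetical fits m1, m2, m3 built with weights lists w1, w2, w3: compare information criteria and check how much residual autocorrelation each model leaves behind.

AIC(m1, m2, m3)                         # lower is better
moran.test(residuals(m1), listw = w1)   # repeat for each fit with its own weights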

Spatial eigenvector maps
- Generate new predictors that represent the spatial structure of the data
- Three steps (a sketch follows below):
  1. Calculate a pairwise distance matrix
  2. Do a principal components analysis on this matrix
  3. Select some of these PCA axes to add to an OLS model
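
A minimal from-scratch sketch of the three steps using base R, assuming coordinates x, y and the hypothetical data frame dat from earlier; the distance-matrix truncation used in full SEVM/PCNM analyses is omitted here, and the choice of four filters is illustrative. In practice spdep/spatialreg's ME() or vegan's pcnm() automate much of this.

d = dist(cbind(x, y))                    # 1. pairwise distance matrix
pcoa = cmdscale(d, k = 10, eig = TRUE)   # 2. principal coordinates of the distance matrix
filters = as.data.frame(pcoa$points)     #    columns are the spatial "filters"
names(filters) = paste0("filter", 1:ncol(filters))
# 3. add selected filters as predictors in an ordinary OLS model
m_sevm = lm(rich ~ temp + filter1 + filter2 + filter3 + filter4,
            data = cbind(dat, filters))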

Spatial eigenvector maps
[Figure: example eigenvector maps, from Diniz-Filho and Bini 2005]

[Figure: maps of spatial filters 1, 2, 3, and 4]

[Figure: maps of spatial filters 10, 20, 30, and 40]