Analyzing Spatial Nonstationarity in Multivariate Relationships UVM Math/Stat Department lecture, March 31, 2004, by Austin Troy, University of Vermont, Rubenstein School of Environment and Natural Resources

Spatial Nonstationarity The notion that relationships change across space. –That is, the relationship between y and (x_1, ..., x_n) is not constant from one location to the next. Can a variable be included in a model that proxies for space? Sometimes this is possible, but very often the factors that make one location different from another are non-quantifiable, or involve extremely complex interactions that cannot be parsimoniously modeled. This is especially true in social research.

Approaches to nonstationarity There are several approaches to dealing with this problem. Among them are: 1. Create zones of homogeneity and stratify 2. Allow parameters to vary continuously across space

First Approach 1. Parameterize a global model and look at the residuals to detect patterns

First Approach Use patterns in residuals to define patches. Specify a separate equation for each patch, or stratum.

Second Approach 2. Continuous variation: create measures of statistical relationships that vary continuously across space, such as GWR, where the β's vary continuously as a function of location (u, v) at each point i (Fotheringham 2002).
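In standard notation, the GWR specification described above can be written as follows (a standard form, consistent with Fotheringham, Brunsdon and Charlton 2002):

y_i = \beta_0(u_i, v_i) + \sum_k \beta_k(u_i, v_i)\, x_{ik} + \varepsilon_i

where (u_i, v_i) are the coordinates of observation i, so each coefficient is a function of location rather than a single global constant.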

GWR Weighted moving window regression method developed by Fotheringham and Brunsdon (2000, 2002), building on the work of Hastie and Tibshirani (1990) and Loader (1999). An expanded form of the standard multiple regression equation. Coefficients are deterministic functions of location in space. Uses a weighted least squares approach. Fully unbiased estimates of the local coefficients are impossible, but estimates with only slight bias are possible.

GWR estimation A separate regression is run for each observation, using a spatial kernel that centers on a given point and weights observations subject to a distance decay function. Can use a fixed-size kernel or an adaptive kernel to determine the number of local points that will be included in each local regression. Adaptive kernels are used when the data are not evenly distributed.
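At each regression point the local coefficients are estimated by weighted least squares, with the weights supplied by the kernel (the standard GWR estimator):

\hat{\beta}(u_i, v_i) = \left( X^{\top} W(u_i, v_i)\, X \right)^{-1} X^{\top} W(u_i, v_i)\, y

where W(u_i, v_i) is a diagonal matrix whose entries are the kernel weights of each observation with respect to location i.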

GWR kernel From Fotheringham, Brunsdon and Charlton, Geographically Weighted Regression: GWR with a fixed kernel vs. GWR with an adaptive kernel. Points are weighted based on distance from the center of the kernel, e.g. a Gaussian kernel where the weighting is given by w_{ij} = \exp\left[-\tfrac{1}{2}(d_{ij}/b)^2\right], where d_{ij} is the distance from regression point i to observation j and b is the bandwidth.
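A minimal sketch of the local fit just described, in Python, assuming NumPy arrays X (n x p design matrix with an intercept column), y (n responses) and coords (n x 2 point coordinates); the function names and the fixed bandwidth value are illustrative, not from the lecture.

import numpy as np

def gaussian_weights(coords, focal_xy, bandwidth):
    # Gaussian distance-decay weights: w_ij = exp(-0.5 * (d_ij / b)^2)
    d = np.sqrt(((coords - focal_xy) ** 2).sum(axis=1))
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def gwr_fit_at_point(X, y, coords, focal_xy, bandwidth):
    # Weighted least squares at one regression point: (X'WX)^-1 X'Wy
    w = gaussian_weights(coords, focal_xy, bandwidth)
    XtW = X.T * w                                   # equivalent to X' diag(w)
    return np.linalg.solve(XtW @ X, XtW @ y)

# One local coefficient vector per observation (fixed-kernel GWR):
# betas = np.array([gwr_fit_at_point(X, y, coords, c, bandwidth=2000.0) for c in coords])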

GWR kernel Adaptive kernel width is determined through minimization of the corrected Akaike Information Criterion, where tr(S) is the trace of the hat matrix S and n is the number of observations.
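The formula itself did not survive in the transcript; the corrected AIC commonly used for GWR bandwidth selection (as given in Fotheringham, Brunsdon and Charlton 2002) is:

AIC_c = 2n\ln(\hat{\sigma}) + n\ln(2\pi) + n\left\{\frac{n + \mathrm{tr}(S)}{n - 2 - \mathrm{tr}(S)}\right\}

where \hat{\sigma} is the estimated standard deviation of the residuals.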

Bias and variance tradeoff There is a tradeoff between bias and standard error. The smaller the bandwidth, the more variance but the lower the bias; the larger the bandwidth, the more bias but the more the variance is reduced. This is because we assume there are many betas over space, and the more the fit resembles a global regression, the more biased it is. AIC minimization provides a way of choosing a bandwidth that makes an optimal tradeoff between bias and variance.

GWR outputs The result is a statistical output showing global summary stats and parameter estimates, local model summary stats, and non-stationarity stats for each parameter. Also produces a map output of points with parameter estimates, standard errors and "pseudo t statistics" for each variable at each point.

A simple (?) question we can address with this approach How is proximity to trees and other “green assets” reflected in property values?

We can ask this with hedonic analysis An econometric method for disaggregating observed housing prices into a series of unobserved "implicit" prices, reflective of willingness to pay (WTP) for a given marginal change in an attribute. Price = fn(structure, lot, location). Has been used extensively for valuing environmental amenities and disamenities.
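A typical hedonic specification (a generic form for illustration, not necessarily the exact one estimated in this study) regresses the log of sale price on the bundle of attributes, so each coefficient approximates the proportional change in price for a one-unit change in that attribute:

\ln P_i = \beta_0 + \sum_k \beta_k x_{ik} + \varepsilon_i, \qquad \frac{\partial P_i}{\partial x_{ik}} \approx \beta_k P_i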

Hedonics and space Hedonic analysis is generally taken to be spatially stationary; that is, it is assumed that marginal WTP for an attribute is fixed within a housing market. Usually these markets are assumed to be quite big and tools for determining these market boundaries are poorly developed (see approach 1, earlier)

Case Study: Two Scales of Analysis 1. Block group level analysis of median housing price as a function of tree cover, controlling for many other factors 2. Property level analysis of housing prices as a function of tree cover and parks, controlling for structural, locational and environmental attributes (hedonics) Research is part of the Baltimore Ecosystem Study, an NSF-LTER site

Block group analysis Initial pattern of housing values tells us what we know intuitively: suburbs are more valuable than the central city, but value goes down again as you get far enough from the city center

Block group analysis Average canopy cover by block group, as derived from 2000 USGS 30 m canopy cover analysis

Running GWR at the BG level Given the spatial dependence in the data, it is likely that the coefficients of a multivariate relationship will be related to the spatial processes underlying the spatial dependence. Hence, we choose to run GWR and compare it to global regression.

Equation Predictor variables:
–Median age
–Percent with income greater than $100k
–Median household income
–Percent owner occupied housing
–Percent vacant buildings
–Percent single family detached homes
–Mean tree canopy percentage
–Median number of rooms per house
–Median age of housing
–Percent with mortgage
–Percent high school educated
–Percent African American
–Percent protected land

Data This was run using census block group data from 2000 for the five counties in the Baltimore Metro Area. Observational problem: while the population of BGs is relatively constant, their size is not, hence there may be some form of Modifiable Areal Unit Problem; there may also be varying levels of heterogeneity within block groups.

Results: Global Model The global model is highly significant. Canopy is significant at α = .05, while percent protected land is not. All other control variables are significant.

Comparison of Local and Global The ANOVA tests the null hypothesis that the GWR model represents no improvement over a global model, and rejects the null. Notice also that the coefficient of determination increases significantly and AIC decreases.

Local Parameter Variability We also conduct a Monte Carlo significance test, which finds that almost all variables are spatially non-stationary, although protected land is not.

How does this testing work? We expect all parameters to have slight spatial variations; is that variation sufficient to reject the null hypothesis that the parameter is globally fixed? Under that null, any permutation of the regression variables against locations is equally likely, allowing us to model a null distribution of the variance. A Monte Carlo approach is adopted to create this distribution, in which the geographical coordinates of the observations are randomly permuted against the variables n times; this yields n values of the variance of the coefficient of interest, which we use as an experimental distribution. We can then compare the actual value of the variance against these values to obtain the experimental significance level.
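A minimal sketch of this permutation test, assuming a helper local_coefficients(X, y, coords) that refits the local regressions and returns an n x p matrix of local estimates (for example, built from the per-point fit sketched earlier); the function name and the number of permutations are illustrative.

import numpy as np

def nonstationarity_pvalue(X, y, coords, local_coefficients, k, n_perm=99, seed=0):
    # Is the spatial variance of coefficient k larger than expected
    # when locations are randomly reassigned to observations?
    rng = np.random.default_rng(seed)
    observed = np.var(local_coefficients(X, y, coords)[:, k])
    null_vars = []
    for _ in range(n_perm):
        shuffled = coords[rng.permutation(len(coords))]   # permute locations
        null_vars.append(np.var(local_coefficients(X, y, shuffled)[:, k]))
    exceed = sum(v >= observed for v in null_vars)        # rank of observed variance in the null
    return (exceed + 1) / (n_perm + 1)                    # experimental significance level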

Local R squared shows that model fit varies by location

Notice also how under GWR, there is no pattern to the residuals because it accounts for spatial effects

Results show that the city center is where tree canopy cover is valued highest on the margin; this might be because trees are scarcest there

However, the pseudo t statistic reveals that not all areas are significant

When we blank out the non-significant observations, we see that trees are only significantly reflected in property values in some areas, and the relationship is negative in Howard County

Interpretation In some areas, canopy cover is valued more highly than in others. The degree to which canopy is valued may relate to the scarcity or spatial distribution of trees at multiple scales. It may be negative in Howard County because trees are associated with some other factor, like "ruralness," which is not properly quantified by the census and which exerts a downward influence on housing prices

Other variables We’ll ignore protected land, since it’s not non-stationary. However, some of the other control variables are telling. Let’s look at median house age.

Pattern makes sense: older homes yield a positive premium in the more affluent suburbs, but a negative premium in the poorer central city

When we do a cluster analysis (PAM) based on 3 parameter values (canopy, house age and number of rooms), we get something that looks like housing submarkets

Here's what it looked like with 6 clusters. Note that the silhouette score for the PAM was 0.55 for the 4-class solution and 0.5 for the 6-class solution, indicating strong structuring
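A minimal sketch of that clustering step, assuming the local parameter estimates are held in a NumPy array betas with one column per coefficient (canopy, house age, rooms) and one row per block group, and assuming the KMedoids (PAM) implementation from the scikit-learn-extra package; the names and cluster counts here are illustrative.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import silhouette_score
from sklearn_extra.cluster import KMedoids   # PAM implementation (assumed dependency)

Z = StandardScaler().fit_transform(betas)     # put the three coefficients on a common scale
for k in (4, 6):
    labels = KMedoids(n_clusters=k, random_state=0).fit_predict(Z)
    print(k, "clusters, silhouette =", round(silhouette_score(Z, labels), 2))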

Problems with this approach Using block group level data is not ideal for a number of reasons:
–Modifiable Areal Unit Problem
–BG obscures significant within-unit heterogeneity
–Level of heterogeneity (variance) varies between observations
–There may be spatial clustering within BGs
–Data is not very accurate at this level
–Attribution not great either

Property Level Analysis Property level analysis is a way of dealing with the local spatial heterogeneity and of getting much better attribution. Used the Maryland Property View data set for half of Baltimore City. Regressed log price against 22 structural, neighborhood and locational variables, including several environmental variables.

Simple plot of log price shows that while there are slight patterns, there is still considerable heterogeneity over space and no clear patches emerge. Moreover, price is only one factor, and social patches are defined on a number of factors.

Plotting out log price normalized by square footage gives a similarly heterogeneous result. Plots of other variables also show similar heterogeneity.

Overall results Local model was a considerable improvement over the global model

Non-stationarity of parameters A Monte Carlo significance test was used to determine whether parameters are significantly non-stationary. All but 4 were at the α = .05 significance level.

Plotting out Parameter Estimates and T values

Parameter on dummy variable for trees within 20 meters

T value on tree dummy variable The only really significant relationship is on the east side of the study area. Note that 1458 out of 2350 observations had a "1" for this variable. A better approach for the future would be a tree density index

Parameter value on distance to nearest park

T value on Distance to nearest park Areas where parks are highly valued in the real estate market

Number of Baths Parameter values show clear patchiness: in some areas an additional bathroom adds 6-9% to home value, while in others it adds only 1.5 to 3%

T value of the parameter on number of bathrooms shows that number of bathrooms is only significantly related to price in the northern neighborhoods (small dots are observations where the parameter estimate is insignificant)

Parameter value on dummy for single family home

T value on SFH dummy Note that even though there are SFHs in the south, SFH status only appears to add value in the North

Parameter values on Age of Structure Shows clear patchiness

T values on Age of Structure Show that age is most significantly related to price near the center of town. Only in the northeast is it positively related, suggesting that older homes have "historic" value there but not elsewhere

Generating Fuzzy Surfaces With this data, surfaces can be interpolated showing the change in parameter values over space. Unfortunately, this only works well where most points have significant parameter estimates
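The slides do not name the interpolation method used; as one possible sketch, a parameter surface can be gridded with SciPy, where coords holds the property coordinates and beta_k the local estimates of one coefficient (both names, and the linear method, are assumptions for illustration).

import numpy as np
from scipy.interpolate import griddata

xg, yg = np.meshgrid(
    np.linspace(coords[:, 0].min(), coords[:, 0].max(), 200),
    np.linspace(coords[:, 1].min(), coords[:, 1].max(), 200),
)
surface = griddata(coords, beta_k, (xg, yg), method="linear")   # NaN outside the convex hull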

Surface Example: Parameter on Distance to Park Interpolation allows us to see "patches" much more clearly because it smooths out some of the within-group heterogeneity

Surface Example: Parameter on structure age

T Value on structure age Note that interpolation of t value looks almost the same

Combining interpolated layers In a preliminary attempt we took four interpolated layers for dummy variables (hence all on the same scale): structure age > 80, trees within 20 meters, SFH, and brick. Adding them together gives a test composite layer; the more layers we add, the more we begin to see clear differentiation between neighborhoods
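As a sketch of that overlay, assuming each interpolated dummy-variable coefficient surface is a NumPy grid on a common extent (the array names are illustrative):

import numpy as np

layers = np.stack([age_gt_80_surface, trees_20m_surface, sfh_surface, brick_surface])
composite = np.nansum(layers, axis=0)                  # sum, ignoring NaN cells outside the hulls
composite[np.all(np.isnan(layers), axis=0)] = np.nan   # keep cells with no data at all empty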

Patch Classification This can then be used to classify areas into distinct patches or zones. These zone boundaries are based on a vector of parameters from the multivariate equation. This is extremely preliminary, an example for display purposes only.

Conclusion GWR offers an excellent way to define meaningful socio-economic boundaries. It allows us to look at the spatial arrangement of the relationship between y and a given predictor, controlling for all other variation. In particular, it offers an excellent way to look at the spatial variability of social phenomena, which are often mediated by spatial processes that cannot be quantified, like trying to stay friendly with your neighbors and neighborhood association. It suggests that preferences do vary over space and that future analyses must take this into account