ARC Zapping and Dis-aggregation of Socio-Economic Data using GIS
Atlanta Regional Commission, May 25, 2007

Presentation transcript:

ARC Zapping and Dis-aggregation of Socio-Economic Data using GIS
Project Team: Mike Alexander (ARC), Guy Rousseau (ARC), Claudette Dillard (ARC), Wei Wang (ARC), Patti Schropp (PBS&J), Mourad Bouhafs (PBS&J), Chris Simons (PBS&J), Stephen Bourne (PBS&J)
Presented by: Stephen Bourne, May 25, 2007

Goal: disaggregate the regional-scale jobs and households projection into a traffic-analysis-zone projection.
Traversing scales: regional to local, population to traffic demand.
[Diagram: the DRAM/EMPAL regional model projects population, jobs, and households at the census-tract level; each census tract is broken into N TAZs (TAZ1 through TAZ8 in the example).]

ARC TAZ-Disaggregator

Current Method for Dis-aggregation: ZAPPER
ZAPPER allocates change at the census-tract (CT) level via the known distribution at the TAZ level:
change in variable (CT level) + current distribution of variable (TAZ level) = distribution of the change of the variable.
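ZAPPER's proportional rule can be sketched in a few lines of Python. This is a minimal illustration, not the actual FORTRAN implementation; the function and variable names are invented:

```python
# Minimal sketch of ZAPPER-style proportional allocation (illustrative only;
# names are assumptions, not the tool's actual API).

def allocate_change(ct_change, taz_current):
    """Split a census-tract-level change among its TAZs in proportion
    to each TAZ's current share of the variable."""
    total = sum(taz_current.values())
    if total == 0:
        # No existing development to base shares on; split evenly.
        return {taz: ct_change / len(taz_current) for taz in taz_current}
    return {taz: ct_change * value / total for taz, value in taz_current.items()}

# Example: 300 new households in a tract split into three TAZs.
print(allocate_change(300, {"TAZ1": 100, "TAZ2": 50, "TAZ3": 50}))
# {'TAZ1': 150.0, 'TAZ2': 75.0, 'TAZ3': 75.0}
```

Note that the rule only redistributes change according to the existing pattern, which is exactly the limitation the next slide raises.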

Opportunity for Improvement in ZAPPER
The distribution of a variable (jobs, households) will change as land is used up, and the location of new jobs and households will be driven by the likelihood of development. In the example map, one TAZ has far more vacant land than the others, so the new distribution will not equal the existing distribution. Is it more likely that new development will occur in the heavily developed TAZ or in the less developed one? Allocation of the projection must be driven both by the availability of land and by the likelihood of development.
[Map legend: basic employment, commercial employment, un-developable, residential, vacant; heavily vs. moderately developed TAZs; I-85 and major roads.]

TAZ Disaggregator
Develop a new tool to dis-aggregate ARC population and employment forecasts from large districts (census tracts) to traffic analysis zones (TAZs):
- GIS-based, user-friendly tool
- Proximity analysis to drive allocation (roadways, interchanges, sewers, existing development, DRIs)
- Calibrated and verified
- Scenario analysis (traditional growth, re-development, mixed use)
- Replaces the current ZAPPER (FORTRAN-based programs)

Proximity Analysis to Drive Allocation
Emulate development at the parcel level, then aggregate up to the TAZ level. How to emulate development?
1. Use proximity analysis to determine where new development goes.
2. Allocate the projected new development (e.g., 1,000 new jobs, 500 new households).
[Map legend: basic employment, commercial employment, un-developable, residential, vacant.]

Likelihood of Development: The Likelihood ('L') Raster
The L raster is composed of several term rasters, one for each factor:
L(x,y)_Category = β_1,Category · F_1(x,y) + β_2,Category · F_2(x,y) + ... + β_N,Category · F_N(x,y)
Each land-use type has a different L raster: commercial, basic employment, residential.
[Example L raster for commercial land use, summing term rasters such as β_2 · (major roads / expressway) and β_3 · (commercial land use).]

Likelihood of Development: The Likelihood ('L') Raster
Potential factors (F):
- Major roadway, expressway, interchange
- Sewer availability
- Floodplain
- Previous development (by type)
- DRI
- LCI
- Central tendency
- Proximity to schools (quality)
- Airport noise
The L raster is composed of several term rasters, one for each factor:
L(x,y)_Category = β_1,Category · F_1(x,y) + β_2,Category · F_2(x,y) + ... + β_N,Category · F_N(x,y)
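The weighted-sum construction of the L raster can be sketched with NumPy. This is an illustrative example; the factor rasters, weights, and function name are invented, and the real tool computes these inside ArcMap:

```python
# Illustrative sketch of building a likelihood ("L") raster as a weighted sum
# of factor rasters, as on the slide. Factor names and weights are invented.
import numpy as np

def likelihood_raster(factors, weights):
    """L(x, y) = sum_i beta_i * F_i(x, y), with factors given as
    equally-shaped 2-D arrays normalized to [0, 1]."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights are constrained to sum to 1"
    L = np.zeros_like(factors[0], dtype=float)
    for beta, F in zip(weights, factors):
        L += beta * F
    return L

# Example: two tiny 2x2 factor rasters (proximity to roads, proximity to like land use).
roads = np.array([[1.0, 0.5], [0.5, 0.0]])
like_lu = np.array([[0.0, 1.0], [0.5, 0.5]])
L = likelihood_raster([roads, like_lu], [0.6, 0.4])
print(L)  # [[0.6 0.7]
          #  [0.5 0.2]]
```

Each term raster corresponds to one β_i · F_i product; cells with high likelihood attract the allocated growth first.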

Algorithm: Emulation of Development
Create a half-acre grid. For each year:
- Household allocation: for each income level, find the L raster and allocate the D/E-projected household changes by income (CT level).
- Employment allocation: for each employment type, find the L raster and allocate the D/E-projected employment changes (CT level).
Outputs: a land-use projection, HH/income matrices at the TAZ level, and employment estimates at the TAZ level.
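The allocation step inside the loop can be sketched as a greedy fill of grid cells in descending likelihood order. This is an assumed simplification; the tool's actual rules for the half-acre grid and densities are richer than this:

```python
# Assumed sketch: allocate projected growth to grid cells by descending
# likelihood, subject to a per-cell capacity.
def allocate_by_likelihood(projected_units, cells):
    """cells: list of (cell_id, likelihood, capacity). Returns units per cell."""
    allocation = {}
    remaining = projected_units
    for cell_id, likelihood, capacity in sorted(cells, key=lambda c: -c[1]):
        if remaining <= 0:
            break
        placed = min(capacity, remaining)
        allocation[cell_id] = placed
        remaining -= placed
    return allocation

# Example: 120 projected households over three cells.
print(allocate_by_likelihood(120, [("A", 0.9, 50), ("B", 0.4, 100), ("C", 0.7, 50)]))
# {'A': 50, 'C': 50, 'B': 20}
```

Aggregating the per-cell allocations back up by TAZ would yield the TAZ-level household and employment estimates the slide lists as outputs.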

Emulating Development

TAZ Disaggregator Toolset
- ArcMap 9.2 extension
- Table-of-contents panel; workflow-driven, tree-based
- Scenarios use differing factors and weights β_i: traditional growth, high redevelopment
- Creates growth animations

Setting up a Scenario for Calibration
1. Specify business layers: census-tract projections, density, TAZ polygons.
2. Specify land-use layers: start land use, calibration-point land use, verification-point land use.
3. Add proximity factors: proximity to roads, proximity to like land use; the user is free to add any factor.
4. Run the calibration tool. It proceeds with an optimization routine: using the simplex routine, it tries weight sets until it can no longer improve the difference between actual land use (at the calibration point) and modeled land use.
The list of map layers is available as proximity factors or business layers. To set a layer as an input in the tree, click the node you want to specify, then double-click the layer you want to use in the list. Run calibration by double-clicking the run-calibration node in the tree.

Calibration Process
[Flowchart: set weights → run the emulation process → find calibration projections → find the calibration SSE → evaluate the SSE → finished? → done.]
The emulation process is the same process run when doing a simple projection; the calibration steps are run within the emulation code only when calibrating. The SSE measures how close the modeled land use is to the actual land use. The meta-operation of calibration runs the basic emulation process many times to find the set of weights whose modeled land use is as close as possible to the actual land use at the end of the calibration period. Depending on the calibration procedure being run, the SSE produced by a single set of weights is compared to the other SSEs to determine which set of weights to try next.
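The SSE objective used throughout calibration can be written down directly. This is a sketch; the per-cell land-use encoding shown here is an assumption:

```python
# Sketch of the calibration objective: sum of squared errors between modeled
# and actual land use, with each raster coded here as per-cell values.
import numpy as np

def calibration_sse(modeled, actual):
    """SSE between modeled and actual land-use arrays of the same shape."""
    modeled = np.asarray(modeled, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sum((modeled - actual) ** 2))

print(calibration_sse([[1, 0], [0, 1]], [[1, 1], [0, 0]]))  # 2.0
```

A lower SSE means the emulated land-use pattern better matches the observed one at the calibration point.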

Two Calibration Procedures to Find the Best Set of Weights
1. Brute-force method:
   a. Choose a resolution for the factor weights (e.g., 0.2 for a weight in [0.0, 1.0] gives 0.0, 0.2, 0.4, 0.6, 0.8, 1.0).
   b. For every permutation of weights subject to Σ β_i = 1.0, find the SSE using the calibration process. E.g., if the factors are proximity to expressway ramp, major road, and like land use, try 0.0 · Expressway + 0.0 · MajorRoad + 1.0 · LikeLanduse; 0.0 · Expressway + 0.2 · MajorRoad + 0.8 · LikeLanduse; 0.0 · Expressway + 0.4 · MajorRoad + 0.6 · LikeLanduse; and so on.
   c. Select the weight set with the lowest SSE as the best weights.
2. Simplex method: searches for the lowest SSE by following the SSE surface in the direction of the steepest downward gradient (explained on the next slides).
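The brute-force procedure can be sketched as a grid enumeration under the sum-to-one constraint. The toy objective below stands in for a full emulation run, and its optimum at (0.2, 0.4, 0.4) is borrowed from the slide's example:

```python
# Assumed sketch of brute-force calibration: enumerate all weight sets on a
# grid (resolution 0.2) that sum to 1.0 and keep the one with the lowest SSE.
from itertools import product

def brute_force_weights(n_factors, resolution, sse_for_weights):
    steps = round(1.0 / resolution)
    grid = [i * resolution for i in range(steps + 1)]
    best = None
    for weights in product(grid, repeat=n_factors):
        if abs(sum(weights) - 1.0) > 1e-9:
            continue  # enforce the sum-to-one constraint
        sse = sse_for_weights(weights)
        if best is None or sse < best[1]:
            best = (weights, sse)
    return best

# Toy objective with a known optimum at (0.2, 0.4, 0.4).
toy_sse = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, (0.2, 0.4, 0.4)))
print(brute_force_weights(3, 0.2, toy_sse))  # ((0.2, 0.4, 0.4), 0.0)
```

With three factors at resolution 0.2 this visits every point on the constraint plane, which is exactly why the simplex search on the next slides is attractive as the number of factors grows.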

Calibration Result: Brute Force
[Chart: the calibration identifies the factor weight set that gives the best result, i.e., the lowest SSE.]

Calibrating Weights: The Simplex Procedure
The objective is to find the weights β_i that make the modeled land use match the actual land use (measured with the SSE). Because of the Σ β_i = 1.0 constraint, viable weight sets for three factors lie on a plane in (factor 1, factor 2, factor 3) space; each set of weights produces an SSE, and the goal is the set with the minimum SSE (the optimal SSE in the example is at (0.2, 0.4, 0.4)).
To find the best set of weights:
1. Start with a triangle (simplex) and find the SSE at each vertex.
2. "Flip" the triangle, moving toward lower SSE.
3. Stop when no further improvement in SSE can be found.
With the brute-force procedure in this example (every set of weights on a grid with resolution 0.2), 21 points on the objective surface must be calculated; the simplex search calculates only 10. The simplex typically requires far fewer calculations than brute force, and the advantage extends to N factors: the fraction of the brute-force calculations required shrinks rapidly as N grows.
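The downhill search can be sketched with a simplified greedy walk on the resolution-0.2 weight grid. This is a stand-in for the full simplex flips, not the tool's routine, and the toy SSE again replaces a real emulation run:

```python
# Simplified stand-in for the simplex "flips": a greedy downhill walk on the
# constrained weight grid. Each move shifts `resolution` of weight from one
# factor to another, so every candidate stays on the sum-to-one plane.

def downhill_search(start, resolution, sse):
    current = start
    while True:
        neighbors = []
        n = len(current)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                cand = list(current)
                cand[i] = round(cand[i] - resolution, 10)
                cand[j] = round(cand[j] + resolution, 10)
                if min(cand) >= 0 and max(cand) <= 1:
                    neighbors.append(tuple(cand))
        best = min(neighbors, key=sse, default=current)
        if sse(best) >= sse(current):
            return current  # no downhill neighbor left: stop
        current = best

# Toy objective with its optimum at the slide's example point (0.2, 0.4, 0.4).
toy_sse = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, (0.2, 0.4, 0.4)))
print(downhill_search((1.0, 0.0, 0.0), 0.2, toy_sse))  # (0.2, 0.4, 0.4)
```

Like the real simplex, this visits only the weight sets along its downhill path rather than the whole grid; also like the real simplex, a plain greedy walk can stall in a local trough, which is why the next slide notes that the procedure is modified to avoid getting stuck.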

Calibrating Weights: Example (resolution = 0.1)
[SSE surface plot: each point represents a set of weights, e.g. (0, 0.2, 0.8), and results in an SSE. The simplex procedure searches the surface for the lowest SSE. A trough in the SSE surface is visible; the simplex is modified to avoid getting stuck there and returning a local-optimum solution.]

Advantage of the Simplex Process
With the simplex method the same result is found, but it requires calculating only 12 of the 21 possible sets of weights. The savings in computation grow as the number of factors increases and as the resolution becomes finer.

How Powerful Are Our Conclusions?
It is important to understand whether the chosen weights have significant skill. Introduce a hypothesis test:
Null hypothesis H0: the proximity analysis under the chosen weights does not skillfully predict development.
Alternative H1: the proximity analysis under the chosen weights does skillfully predict development.
Build the null distribution by selecting weight sets at random 1,000 times (Monte Carlo) and evaluating the resulting SSE each time. Compare the SSE found through calibration (or through user specification) against the good-fit tail of the null distribution: the cutoff below which only 5% of randomly weighted runs fall. If the calibrated SSE is better than that cutoff, the null hypothesis can be rejected and the model is skillful (strictly, not unskillful). The larger the gap between the calibrated SSE and the cutoff, the higher the implied skill of the prediction (and the greater the power of the test).
[Figure: null distribution of SSE built from randomly chosen weight sets, with the 5% cutoff marked; the calibrated SSE sits in the tail, and the larger the gap, the more confident we are in the model's skill.]
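The Monte Carlo test can be sketched as follows. This is an illustration: the way random weights are drawn, the seed, and the toy SSE are all assumptions, and a real run would evaluate each weight set with the full emulation:

```python
# Sketch of the Monte Carlo skill test: draw 1,000 random weight sets, build
# the null distribution of SSEs, and check whether the calibrated SSE beats
# the low (good-fit) tail. The toy SSE stands in for a full emulation run.
import random

def monte_carlo_test(sse, n_factors, sse_calibrated, n_draws=1000, seed=42):
    rng = random.Random(seed)
    null_sses = []
    for _ in range(n_draws):
        raw = [rng.random() for _ in range(n_factors)]
        total = sum(raw)
        weights = [r / total for r in raw]  # one simple way to get weights summing to 1
        null_sses.append(sse(weights))
    null_sses.sort()
    cutoff = null_sses[int(0.05 * n_draws)]  # ~95% of random-weight SSEs exceed this
    return sse_calibrated < cutoff, cutoff

toy_sse = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, (0.2, 0.4, 0.4)))
skillful, cutoff = monte_carlo_test(toy_sse, 3, sse_calibrated=0.001)
print(skillful)  # True: the calibrated SSE beats the 5% cutoff, so reject H0
```

The further the calibrated SSE sits below the cutoff, the stronger the evidence that the chosen weights are doing better than chance.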

What About Re-development?
Current analyses (ZAPPER) look only at development on vacant land. The new approach, the TAZ Disaggregator, will include re-development; the likelihood of re-development is based on different factors than vacant-land development. An override layer will be added to the tool so the analyst can add known upcoming development; a high likelihood will be assigned to the override area to ensure that projected new employment and households are allocated there first.
[Map legend: basic employment, commercial employment, un-developable, residential, vacant.]