Slide 1
Multi-Polygon Paper for XYZ
Paper Contributions:
- Framework for similarity assessment of contour polygon forests
- Framework for correspondence contouring for a "new" method, considering the contour polygon forests generated by Method A as "ground truth"
- Fast algorithms that can cope with large numbers of polygons in the two frameworks
- Agreement mapping approaches: creating spatial and spatio-temporal maps of agreement
- A real-world case involving flood risk mapping

Main Contributors: Yongli, Qian, Romita, maybe undergraduate students, maybe Jian, maybe Arturo.
Slide 2
Polygon Trees as Ground Truth for Other Methods
Examples of polygon trees as ground truth: FEMA maps and other flood risk maps, elevation maps, … (need more); crime maps; reference the two papers!

Need to use cheaper, parameterized, more automated methods for cost reasons and to refresh outdated data; e.g., FEMA updates the flood risk maps of a region only about every 10(???) years.

Paradigm: Train the cheaper, parameterized methods using polygon trees as ground truth for regions where ground truth is available, to obtain the best parameter settings; then apply the parameterized method with the learnt parameter settings to unprocessed, underserved regions to produce contour polygon trees.

Applications include:
- Flood risk mapping
- Elevation maps with cheaper methods
- Crime risk maps based on crime-type density
Slide 3
Use Polygon Trees as a Ground Truth To Guide Spatial Analysis Algorithms
Mappers may prepare digital elevation models in a number of ways, but they frequently use remote sensing rather than direct survey data. One powerful technique for generating digital elevation models is interferometric synthetic aperture radar, where two passes of a radar satellite (such as RADARSAT-1, TerraSAR-X, or Cosmo SkyMed), or a single pass if the satellite is equipped with two antennas (like the SRTM instrumentation), collect sufficient data to generate a digital elevation map tens of kilometers on a side with a resolution of around ten meters. Other kinds of stereoscopic pairs can be employed using the digital image correlation method, where two optical images are acquired at different angles during the same pass of an airplane or an Earth observation satellite (such as the HRS instrument of SPOT5 or the VNIR band of ASTER).[12]

The SPOT 1 satellite (1986) provided the first usable elevation data for a sizeable portion of the planet's landmass, using two-pass stereoscopic correlation. Later, further data were provided by the European Remote-Sensing Satellite (ERS, 1991) using the same method, the Shuttle Radar Topography Mission (SRTM, 2000) using single-pass SAR, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER, 2000) instrumentation on the Terra satellite using double-pass stereo pairs.[12] The HRS instrument on SPOT 5 has acquired over 100 million square kilometers of stereo pairs.

The quality of a DEM is a measure of how accurate the elevation is at each pixel (absolute accuracy) and how accurately the morphology is presented (relative accuracy). Several factors play an important role in the quality of DEM-derived products: terrain roughness; sampling density (elevation data collection method); grid resolution or pixel size; interpolation algorithm; vertical resolution; terrain analysis algorithm. Reference 3D products include quality masks that give information on the coastline, lakes, snow, clouds, correlation, etc.

Methods for obtaining elevation data used to create DEMs:
- Lidar
- Stereo photogrammetry from aerial surveys
- Structure from motion / multi-view stereo applied to aerial photography[13]
- Block adjustment from optical satellite imagery
- Interferometry from radar data
- Real-time kinematic GPS
- Topographic maps
- Theodolite or total station
- Doppler radar
- Focus variation
- Inertial surveys
- Surveying and mapping drones (e.g., the Gatewing X100 unmanned aerial vehicle)
- Range imaging
Slide 4
Approaches Using Polygon Trees as the Ground Truth
- Correspondence contouring
- Post-process results based on the ground truth; e.g., intersect results with FEMA polygons.
- ????
Slide 5
“Our” InterpolationContouring Approach
- What is unique about it? Are there other approaches in the literature?
- How do we incorporate density into our approach?
- What is the optimal grid size for the approach; how do we determine the optimal grid size?
- How can the approach benefit from ground truth/background knowledge in the form of contour polygon trees?
- How precise are the contour polygons we get?
Slide 6
Multi-Threshold Finding Problem to Maximize Match
Problem Specification: Given a polygon forest PF with k levels that has been obtained for a dataset D, and a method M that, applied with thresholds θ1, …, θk to a dataset D', obtains a polygon forest PF'; moreover, we have a polygon forest distance function dpf. We are trying to find PF' such that dpf(PF, PF') is minimal, with PF' = M(D', (θ1, …, θk)). In other words, given D', M, and PF, we would like to find values θ1, …, θk such that dpf(PF, M(D', (θ1, …, θk))) is minimal.
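The slides leave the distance function dpf unspecified. As a concrete placeholder, here is a minimal Python sketch, under our own assumption that a forest is represented as a list of per-level polygon lists and that the distance is the summed area of the symmetric difference between the level-wise polygon unions (computed with shapely):

```python
# Minimal sketch of a polygon-forest distance dpf. Assumption (ours, not the
# slide's): a forest is a list indexed by level, each entry a list of shapely
# Polygons, and the distance is the summed area of the symmetric difference
# of the level-wise polygon unions.
from shapely.ops import unary_union

def dpf(pf1, pf2):
    assert len(pf1) == len(pf2), "forests must have the same number of levels"
    total = 0.0
    for polys1, polys2 in zip(pf1, pf2):
        u1, u2 = unary_union(polys1), unary_union(polys2)
        total += u1.symmetric_difference(u2).area
    return total
```

Any other forest distance with the same signature would work equally well in the search algorithm on the next slide.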
Slide 7
Algorithm to Find the Best Match w.r.t. dpf
Level-based Algorithm:
For levels i = 1 to i = k do {find θi such that dpf(M(D', θi), level(i, PF)) is minimal}
Return (θ1, …, θk)
where level(i, PF) is a function that returns all level-i polygons; that is, this function returns a set of polygons.
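A runnable sketch of this level-based search, under illustrative assumptions: M(D', θ) returns the contour polygon set for a single threshold θ, level_fn(i, PF) returns the level-i ground-truth polygons, dpf_level compares two polygon sets (e.g., the symmetric-difference-area distance above restricted to one level), and candidates is a user-supplied grid of threshold values. All names are ours, not the slides':

```python
# Sketch of the level-based algorithm above. Assumptions: M(D, theta) returns
# the contour polygons for a single threshold theta; level_fn(i, PF) returns
# the level-i polygons of the ground-truth forest; dpf_level measures the
# distance between two polygon sets; candidates is a grid of threshold values.
def best_thresholds(M, D_prime, PF, k, candidates, dpf_level, level_fn):
    thetas = []
    for i in range(1, k + 1):
        truth = level_fn(i, PF)
        # pick the candidate threshold whose contours best match level i
        best = min(candidates, key=lambda t: dpf_level(M(D_prime, t), truth))
        thetas.append(best)
    return tuple(thetas)
```

Note that this greedy, level-by-level search finds the global optimum only if dpf decomposes into independent per-level terms; otherwise it is a heuristic.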
Slide 8
Problems with the InterpolationContouring Framework
It looks like there is something fundamentally wrong with the results we get for the original HAND datasets, and we should try to understand what the problem is:
- We need to develop a framework soon (visualization and analysis) that allows us to automatically debug and evaluate the different frameworks/versions/parameters (such as grid size) we develop.
- Why do the results disagree so much with those for the FEMA-intersected HAND dataset? Why do we miss obvious hotspots?
- We also get hotspots outside FEMA risk zones, and we should check whether those make any sense.
- We should add a statistical hotspot summary (HAND mean, standard deviation, and number of address points in the hotspot) to the results to facilitate interpretation (see the sketch after this list).
- We should get a better understanding of the impact of the grid size on the results.
- We should better understand the impact of empty grid cells on the results, if there is any; in general, using smaller grid sizes should enhance the precision of the obtained hotspot boundaries, but as a by-product we get empty cells.
- The obtained hotspot polygons seem to be too wide and might need some tightening; e.g., by computing the convex (concave??) hull.
- We should also check for bugs in the algorithms (and maybe even the dataset).
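For the statistical hotspot summary suggested above, a minimal sketch (hotspots, points, and hand_values are hypothetical inputs; requires shapely and numpy):

```python
# Sketch of the statistical hotspot summary: for each hotspot polygon, report
# the mean and std of the HAND values of the address points it contains, plus
# the point count. All input names are hypothetical.
import numpy as np
from shapely.geometry import Point

def hotspot_summaries(hotspots, points, hand_values):
    """hotspots: shapely Polygons; points: (x, y) tuples; hand_values: floats."""
    pts = [Point(x, y) for x, y in points]
    summaries = []
    for poly in hotspots:
        inside = [h for p, h in zip(pts, hand_values) if poly.contains(p)]
        summaries.append({
            "n_points": len(inside),
            "hand_mean": float(np.mean(inside)) if inside else None,
            "hand_std": float(np.std(inside)) if inside else None,
        })
    return summaries
```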
Slide 9
Alternative: Inverse Distance Weighting
Remark: Likely we should find a package that does the same thing or something similar!

Make a grid (or multiple grids using different grid sizes) and fill it with values using two approaches:
a. Interpolate the value at the center of each grid cell using inverse distance weighting over only the points in that grid cell. If there are no points, report null as the value of the cell. (A sketch of this approach follows below.)
b. Interpolate the value at the grid intersection points using inverse distance weighting, using just the points in the 4 cells (3 at boundaries, 2 at corners) that share the intersection point. If there are no points, report null as the value of the intersection point.

Create summaries using the annotated grids:
- Using grid-based heat maps
- Using contouring algorithms

Remark: We are already using a variation of approach b; perhaps we should implement/find a package for approach … soon. Let us discuss approach … when we talk on Tuesday.
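A minimal sketch of approach a (cell-center IDW, with null for empty cells); the grid parameterization and all names are illustrative:

```python
# Sketch of approach (a): interpolate the value at each grid-cell center by
# inverse distance weighting over only the points falling in that cell, and
# leave empty cells as NaN (null). Pure numpy; xs, ys, vals are 1-D arrays.
import numpy as np

def idw_cell_grid(xs, ys, vals, x0, y0, cell, nx, ny, power=2.0):
    grid = np.full((ny, nx), np.nan)
    ix = ((xs - x0) // cell).astype(int)   # column index of each point
    iy = ((ys - y0) // cell).astype(int)   # row index of each point
    for r in range(ny):
        for c in range(nx):
            m = (ix == c) & (iy == r)
            if not m.any():
                continue                   # empty cell -> stays NaN (null)
            cx = x0 + (c + 0.5) * cell     # cell-center coordinates
            cy = y0 + (r + 0.5) * cell
            d = np.hypot(xs[m] - cx, ys[m] - cy)
            w = 1.0 / np.maximum(d, 1e-12) ** power   # IDW weights
            grid[r, c] = np.sum(w * vals[m]) / np.sum(w)
    return grid
```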
Slide 10
UH-DAIS Research to Address these Problems
- Computational methods to create flood risk maps from point-wise or grid-based flood risk assessments (e.g., HAND value maps or elevation maps). We investigated in the past, and are currently investigating:
  - Graph-based approaches
  - Continuous-function-based approaches
- Similarity assessment methods to find agreement between different polygonal flood risk maps
- Correspondence contouring methods (e.g., find a sequence of elevation thresholds so that the obtained contour polygons best match the FEMA flood risk zones)
- Computing agreement maps between different methods
- Creating "better" flood risk maps by combining information from different sources
Slide 11
Other Project Priorities
- Need to be able to extract different types of polygons based on their attribute values; distinguish the 3 levels of flood risk maps for Austin Fire, and the A and X-shaded/B flood risk zones in FEMA maps (see the sketch after this list).
- Find and extract the flow data for AE FEMA flood risk zones.
- Learn how to use HEC-RAS.
- Romita/Qian should be able to run/modify Yongli's program.
- Is there any software available in ArcGIS (or R) that uses an interpolation-contouring approach? Are there any interpolation-based heat maps? Are there any good papers on the interpolation-contouring theme?
- Get our hands on digital elevation maps (DEM) and use them in a similar fashion as the Travis County, FEMA, or HAND-based risk maps!
- Get the gridded HAND dataset by June 23 at the latest!
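For the attribute-based polygon extraction in the first bullet, a sketch using geopandas; we assume the FEMA NFHL field names FLD_ZONE and ZONE_SUBTY and a hypothetical file path, so the actual schema of the data in hand should be checked:

```python
# Sketch of attribute-based polygon extraction with geopandas. Assumed: the
# FEMA layer uses the NFHL fields FLD_ZONE and ZONE_SUBTY (field names vary
# between releases; verify against your data). File paths are hypothetical.
import geopandas as gpd

fema = gpd.read_file("fema_flood_zones.shp")      # hypothetical path
zone_a = fema[fema["FLD_ZONE"] == "A"]            # A zones
zone_ae = fema[fema["FLD_ZONE"] == "AE"]          # AE zones (carry flow data)
x_shaded = fema[(fema["FLD_ZONE"] == "X") &
                (fema["ZONE_SUBTY"].str.contains("0.2", na=False))]  # X-shaded/B
zone_a.to_file("zone_a.shp")                      # export a subset for later use
```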
Slide 12
Other Things to Check Out
- According to rumors, the US Army Corps of Engineers is working on a "new" HEC-RAS.
- Check out Situation Awareness for Everyone (SAFE); non-HEC-RAS approaches; a very interesting company that does business with flood control districts…
- Jared Allen (very interesting slide show …)
Slide 13
Other Research Themes
- How can past knowledge from floods be used for flood risk assessment? What data do we actually have available from past floods?
- How can we make flood risk assessment approaches sensitive to current water levels and the anticipated amount of rainfall in the near future (e.g., in the next six hours)?
- Can the National Water Model be used for flood risk assessment? If yes, how? How is it different from HEC-RAS?
- What could we do concerning creating flood risk maps for Wharton County?
Slide 14
More "Modern Approaches" for Flood Risk Mapping
- Probabilistic flood risk modelling and mapping ( , ) (see the toy sketch below)
- Considering boundary conditions to deal with backwater flooding and other boundaries ( , , , , , )
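The slide only names the probabilistic theme; as a toy illustration of what a probabilistic flood map could look like, the following sketch (entirely our assumption, not a method from the slides) samples uncertain flood water levels, floods a DEM cell whenever the sampled level exceeds its elevation, and reports the per-cell exceedance fraction as a flood probability:

```python
# Toy sketch of probabilistic flood mapping (our illustration). Sample
# uncertain flood water levels, flood a DEM cell whenever the sampled level
# exceeds its elevation, and report the fraction of samples in which each
# cell floods as a per-cell flood probability.
import numpy as np

def flood_probability(dem, level_mean, level_std, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    levels = rng.normal(level_mean, level_std, size=n_samples)
    # broadcast: (n_samples, 1, 1) sampled levels vs. (ny, nx) elevations
    flooded = levels[:, None, None] > dem[None, :, :]
    return flooded.mean(axis=0)   # per-cell probability of flooding
```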
Slide 15
Tentative List of Activities Romita/Qian through July 7, 2017
Tasks (sorted by order/priority):
a. Explore visualization of the HAND dataset using ArcGIS and other tools.
b. Read papers on flood risk mapping.
c. Understand how to run HEC-RAS…
d. See what data are available for Wharton County.
e. Get a better understanding of FEMA flood risk zone data: flow information in AE flood zones; getting all A polygons and B/X-shaded polygons.
f. Get your hands on LiDAR data in particular, and DEMs (digital elevation maps) in general; apply the promising visualization techniques identified in (a) to those.
g. Continue acquiring knowledge in using ArcGIS.
h. How can past knowledge from floods be used for flood risk assessment? What data do we actually have available from past floods?
Slide 16
Tentative List of Activities Romita/Qian July 3-August 31, 2017
- Debug Yongli's InterpolationContouring approach; understand why it is not producing the desired results for the non-FEMA-intersected HAND dataset.
- Enhance the InterpolationContouring approach and/or any other promising technique that has been identified in June and, if we find anything promising, (re-)implement this approach!
- Maybe meet with Dr. Arturo Leon's students, or Dr. Leon himself, to learn how to run his improved HEC-RAS framework.
- Work on the paper to be submitted to XYZ (see slide 1).
- Develop/enhance/implement a framework to analyze similarity, and other relationships, between different flood risk maps; e.g., HAND-based, FEMA, DEM-based, and HEC-RAS-generated flood risk maps.
- Start to work on the correspondence contouring approach.
- Do more with DEMs.
- Do something specific for Wharton County, if feasible.
- Enhance your knowledge of ArcGIS and maybe HEC-RAS.
Slide 17
Distribution of Attribute Values
Here, we're basically talking more or less about interpolation methods. Methods include:
- IDW: Depending on the implementation, this can be global (using all available points in the set) or local (limited by the number of points or the maximum distance between points and the interpolated position). Tools: QGIS interpolation plugin (global), GRASS v.surf.idw or r.surf.idw (local).
- Splines: Again, a huge number of possible implementations; B-splines are popular. Tools: GRASS v.surf.bspline. (See the sketch below.)
- Kriging: Statistical method with various sub-types. Tools: GRASS v.krige (thanks to om_henners for the tip) or using R.
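As an alternative to the QGIS/GRASS tools above, a spline-style interpolation of scattered attribute values onto a grid can be sketched in Python with scipy (toy data; note that griddata's "cubic" mode is piecewise cubic rather than a true B-spline):

```python
# Sketch: interpolate scattered attribute values onto a regular grid with
# scipy's griddata. Toy data; "cubic" gives piecewise-cubic interpolation.
import numpy as np
from scipy.interpolate import griddata

xs = np.random.rand(200)
ys = np.random.rand(200)
vals = np.sin(3 * xs) + np.cos(3 * ys)     # toy attribute values
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
grid = griddata((xs, ys), vals, (gx, gy), method="cubic")
```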