GIS and Image Processing for Environmental Analysis with Outdoor Mobile Robots School of Electrical & Electronic Engineering Queen’s University Belfast.


Slide 1: GIS and Image Processing for Environmental Analysis with Outdoor Mobile Robots
School of Electrical & Electronic Engineering, Queen’s University Belfast, Northern Ireland
Presenter: Paul Kelly
Co-author: Gordon Dodds

Slide 2: Background
- Ground-level images give high resolution and multiple views
- A perspective transformation is necessary to use the images for change detection in 2-D
- This requires geographical knowledge of ground elevation, building outlines, etc.
- In many areas a Geographical Information System (GIS) can be used to augment the images taken by a mobile system

Slide 3: Why use GIS?
- Easy access to surveyed geographical data
- Use of existing spatial analysis and processing functionality
- Already contains advanced visualisation capabilities that can be adapted to combine observed images with GIS data
- The output of visual surveying will in turn become an input to the GIS

Slide 4: Methodology Outline
1. Camera calibration and image correction
2. 3-D database and view reconstruction
3. For each image frame:
   - Camera location approximation (DGPS)
   - Accurate camera localisation using GIS data
   - Ground-level image / GIS processing
4. Change detection and logging *
5. Path and mission planning for change mapping *
(* to be covered in later publications)

Slide 5: Camera Use & Calibration
- Low-cost consumer Digital Video (DV) camera
- Images corrected for the DV pixel aspect ratio and for radial lens distortion (based on straight-line fitting)
- Focal length measured experimentally
- Also calibrated for colour and luminance, for change detection
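The correction step above can be sketched with a one-parameter radial model. The deck only says the distortion was fitted from straight lines, so the single-term model, the coefficient `k1`, and the distortion centre `(cx, cy)` are assumptions for illustration:

```python
def undistort_point(xd, yd, k1, cx, cy, aspect=1.0):
    """Correct one pixel for DV pixel aspect ratio and first-order
    radial lens distortion.

    (cx, cy): distortion centre; k1: radial coefficient, fitted so that
    straight scene edges map to straight image lines. A single-term
    radial model is an assumption -- the deck does not name the model.
    """
    xd = xd * aspect                  # square up the non-square DV pixels first
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy            # squared radius from the distortion centre
    scale = 1.0 + k1 * r2             # radial correction factor
    return cx + dx * scale, cy + dy * scale
```

The distortion centre is a fixed point of the mapping, and with `k1 = 0` and square pixels the correction reduces to the identity, which is a convenient sanity check when fitting.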

Slide 6: Perspective Transformation of a Single GIS “Image”
- Camera calibration data / interior orientation parameters transferred to the GIS 3-D visualisation module
- This enables:
  - Photogrammetric calculations
  - Generation of “camera-eye views” in the GIS
  - Pixel-by-pixel mapping to real-world co-ordinates
- GRASS GIS was modified to facilitate this
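At its core, generating a “camera-eye view” of GIS data is the photogrammetric forward projection. A minimal pinhole sketch follows; the rotation matrix `R`, the pixel-unit focal length `f`, and the axis conventions are illustrative assumptions, not the deck’s actual parameters:

```python
import numpy as np

def camera_eye_view(world_pt, cam_pos, R, f, img_w, img_h):
    """Project a world point (easting, northing, elevation) into the
    image frame of a calibrated camera -- the forward mapping behind a
    GIS "camera-eye view". R is the 3x3 world-to-camera rotation and
    f the focal length in pixels; conventions here are illustrative.
    """
    p = R @ (np.asarray(world_pt, float) - np.asarray(cam_pos, float))
    if p[2] <= 0.0:                      # point behind the camera plane
        return None
    u = img_w / 2.0 + f * p[0] / p[2]    # image column
    v = img_h / 2.0 - f * p[1] / p[2]    # image row (y runs downwards)
    return u, v
```

Running this for every GIS cell, and inverting it per pixel, is what gives the pixel-by-pixel mapping between the image and real-world co-ordinates.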

Slide 7: GIS 3-D View
GIS “camera-eye view” of vector boundary data and GPS spot heights

Slide 8: GIS 3-D View
- 3-D reverse look-up of point co-ordinates (Easting: m, Northing: m, Elevation: 16.93 m)
- Do this for every pixel, combining with the image (colour) data
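The reverse look-up can be sketched as marching along a pixel’s viewing ray until it meets the terrain. The `dem` callable and the step-marching scheme are stand-ins, since the deck does not say how the modified GRASS module performs the intersection:

```python
import numpy as np

def reverse_lookup(cam_pos, ray_dir, dem, step=0.5, max_dist=500.0):
    """Walk along a pixel's viewing ray until it drops below the terrain.

    dem(e, n) returns the ground elevation at easting/northing (e, n);
    a callable stands in for the GIS elevation raster here. Returns the
    (easting, northing, elevation) hit point, or None if the ray leaves
    the area of interest without touching ground.
    """
    pos = np.asarray(cam_pos, float)
    d = np.asarray(ray_dir, float)
    d = d / np.linalg.norm(d)            # unit direction of the viewing ray
    t = step
    while t < max_dist:
        p = pos + t * d                  # sample point along the ray
        ground = dem(p[0], p[1])
        if p[2] <= ground:               # ray has passed below the terrain
            return p[0], p[1], ground
        t += step
    return None
```

The step size trades accuracy against speed; a real implementation would refine the hit by bisection between the last two samples.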

Slides 9–15: Multiple Images — Results
(seven image-only result slides; no transcript text)

Slide 16: Accurate Camera Localisation (processing flow)
Inputs: image data, GIS digital elevation model, GIS vector data, camera calibration data, low-resolution GPS data (initial position estimate)
1. Select vector features for visibility in the image
2. Segmentation and edge detection for these features
3. Project the vector features into the image frame (GIS perspective transform model)
4. Determine the bounding box of the region of interest (ROI)
5. Perform the Modified Hough Transform on this ROI
6. Update the camera position from the MHT results; iterate to the final calculated position

Slide 17: Camera Location Approximation
- Low-cost 2-metre-resolution GPS
- Yaw/pitch/roll inertial sensor
- Sensor fusion gives an initial estimate of position (easting, northing, elevation) and orientation
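A minimal sketch of the fusion step, assuming a plain average over recent GPS fixes with orientation taken directly from the inertial sensor; the deck does not state which fusion filter is actually used:

```python
def initial_pose(gps_fixes, rpy):
    """Fuse low-resolution GPS fixes with inertial roll/pitch/yaw into
    an initial camera pose estimate. Averaging recent fixes is a simple
    stand-in for the real fusion filter; the ~2 m GPS noise is what
    makes the later GIS-based refinement necessary.

    gps_fixes: list of (easting, northing, elevation) tuples
    rpy:       (roll, pitch, yaw) from the inertial sensor
    """
    n = len(gps_fixes)
    east = sum(f[0] for f in gps_fixes) / n
    north = sum(f[1] for f in gps_fixes) / n
    elev = sum(f[2] for f in gps_fixes) / n
    roll, pitch, yaw = rpy
    return {"easting": east, "northing": north, "elevation": elev,
            "roll": roll, "pitch": pitch, "yaw": yaw}
```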

Slide 18: Accurate Camera Localisation
- Match as many features as possible between the GIS vector data (e.g. buildings, land features) and the raster camera image
- Use vector attributes from the GIS to improve the image processing
- Optimisation approach based on a Modified Hough Transform
- The largest errors are in roll/pitch/yaw; image-based information will significantly reduce these
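The matching step rests on the Hough transform. Below is a classical (theta, rho) voting sketch; the deck’s Modified Hough Transform restricts the search to the ROI around each projected GIS feature, and its exact modifications are not given here, so this is only the baseline technique:

```python
import math

def hough_lines(edge_points, img_diag, n_theta=180, n_rho=200):
    """Vote edge pixels into a (theta, rho) accumulator and return the
    strongest straight line as (theta, rho, votes). This is the
    classical Hough transform for lines; rho is quantised over
    [-img_diag, img_diag].
    """
    acc = {}
    for x, y in edge_points:
        for ti in range(n_theta):
            theta = math.pi * ti / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            ri = int(round((rho + img_diag) * n_rho / (2 * img_diag)))
            acc[(ti, ri)] = acc.get((ti, ri), 0) + 1     # cast one vote
    (ti, ri), votes = max(acc.items(), key=lambda kv: kv[1])
    theta = math.pi * ti / n_theta
    rho = ri * 2 * img_diag / n_rho - img_diag           # de-quantise rho
    return theta, rho, votes
```

Restricting the input to edge pixels inside a small ROI, as the flow on slide 16 does, is what keeps the accumulator cheap enough for per-frame use.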

Slide 19: Accurate Camera Localisation (the slide 16 processing-flow diagram, repeated)

Slide 20: GIS Data
- Initial approximation of the observer position
- Measured low-resolution GPS points
- House (example GIS feature)

Slide 21: GIS-aided Landmark Extraction
1. Distortion-corrected image acquired with the vehicle-mounted DV camera
2. House outline projected from the GIS 3-D view module
3. Arbitrary search ROI

Slide 22: GIS-aided Landmark Extraction
House found within the ROI (using image processing)

Slide 23: Automatic Camera Localisation
- Update the approximation of the camera location until object positions coincide (normally three non-coplanar objects)
- Simultaneously use many vector features from the GIS data that can also be identified through image processing (hedges, walls, etc.)
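Making the projected and observed positions coincide is a resection problem. A Gauss-Newton sketch over camera position only is shown here, with a fixed identity rotation and hypothetical landmark/observation values; the full system also refines orientation, where the largest errors lie:

```python
import numpy as np

def project(p_world, cam, f=800.0):
    """Pinhole projection with identity rotation (camera along +z)."""
    d = np.asarray(p_world, float) - np.asarray(cam, float)
    return np.array([f * d[0] / d[2], f * d[1] / d[2]])

def refine_position(cam0, landmarks, observations, iters=20, eps=1e-4):
    """Refine the camera position until projected landmarks coincide
    with their observed image positions: Gauss-Newton steps with a
    finite-difference Jacobian. Position-only and identity-rotation --
    a sketch, not the system's full pose refinement.
    """
    cam = np.asarray(cam0, float).copy()
    for _ in range(iters):
        r = np.concatenate([project(L, cam) - obs
                            for L, obs in zip(landmarks, observations)])
        J = np.zeros((r.size, 3))
        for j in range(3):                 # numeric Jacobian, one column per axis
            step = np.zeros(3)
            step[j] = eps
            rp = np.concatenate([project(L, cam + step) - obs
                                 for L, obs in zip(landmarks, observations)])
            J[:, j] = (rp - r) / eps
        delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
        cam += delta                       # Gauss-Newton update
        if np.linalg.norm(delta) < 1e-10:  # converged
            break
    return cam
```

Three non-coplanar landmarks give six residuals for three unknowns, which is why the slide calls for “normally three non-coplanar objects”.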

Slide 24: Requirements for Extension to Real-Time Usage
- Remote access to a server running the GIS and image processing
- Efficient GIS / mobile robot interfaces
- Use of GIS attributes to select landmark features likely to have the lowest image-processing load
- Pre-planning of expected routing “images”

Slide 25: Summary
- Calibrated camera images can be enhanced
- GIS electronic map data reduces image-processing time and improves landmark extraction
- Automatic perspective transformation of multiple images and view reconstruction enable 3-D changes to be found
- May be used in real time with some efficiency improvements
- GIS use greatly improves efficiency in vision-based navigation and environmental analysis