1
GIS and Image Processing for Environmental Analysis with Outdoor Mobile Robots
School of Electrical & Electronic Engineering, Queen’s University Belfast, Northern Ireland
Presenter: Paul Kelly
Co-author: Gordon Dodds
2
Background
Ground-level images give high resolution and multiple views
Perspective transformation necessary to use images for change detection in 2-D
Requires geographical knowledge of ground elevation, building outlines, etc.
In many areas a Geographical Information System (GIS) can be used to augment the images taken by a mobile system
3
Why use GIS?
Easy access to surveyed geographical data
Use existing spatial analysis and processing functionality
Already contains advanced visualisation capabilities that can be adapted for combining observed images with GIS data
Output of visual surveying will become an input to the GIS
4
Methodology Outline
1. Camera calibration – image correction
2. 3-D database and view reconstruction
3. For each image frame:
   – Camera location approximation (DGPS)
   – Accurate camera localisation using GIS data
   – Ground-level image / GIS processing
4. *Change detection and logging
5. *Path and mission planning for change mapping
(* to be covered in later publications)
5
Camera Use & Calibration
Low-cost consumer Digital Video (DV) camera
Images corrected for DV pixel aspect ratio and radial lens distortion (based on straight-line fitting)
Focal length measured experimentally
Also calibrated for colour and luminance for change detection
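A minimal sketch of the image-correction step, assuming OpenCV and NumPy; the pixel aspect ratio, focal length, and distortion coefficients below are placeholders rather than the values measured for the DV camera in this work.

```python
import cv2
import numpy as np

def correct_frame(frame_bgr):
    # DV pixels are non-square; rescale the width to give a 1:1 pixel aspect
    # ratio. The ratio here is a placeholder, not the measured value.
    pixel_aspect = 1.07
    h, w = frame_bgr.shape[:2]
    square = cv2.resize(frame_bgr, (round(w * pixel_aspect), h))

    # Pinhole intrinsics: focal length in pixels (measured experimentally in
    # the paper; 1000 px is a placeholder), principal point at the image centre.
    f = 1000.0
    K = np.array([[f, 0.0, square.shape[1] / 2.0],
                  [0.0, f, square.shape[0] / 2.0],
                  [0.0, 0.0, 1.0]])

    # Radial distortion coefficients k1, k2 (estimated by straight-line fitting
    # in the paper; the values below are placeholders only).
    dist = np.array([-0.12, 0.03, 0.0, 0.0])
    return cv2.undistort(square, K, dist)
```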
6
Perspective Transformation of a Single GIS “Image”
Camera calibration data / interior orientation parameters transferred to the GIS 3-D visualisation module
This enables:
– Photogrammetric calculations
– Generation of “camera-eye views” in GIS
Pixel-by-pixel mapping to real-world co-ordinates
GRASS GIS modified to facilitate this
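The photogrammetric calculation behind a “camera-eye view” is essentially the pinhole projection of map coordinates into the image. A sketch under assumed conventions (camera z axis pointing forward; R and the camera centre come from the localisation step):

```python
import numpy as np

def project_world_point(p_world, K, R, cam_center):
    """p_world: (easting, northing, elevation) in map coordinates.
    K: 3x3 intrinsics; R: world-to-camera rotation; cam_center: camera
    position in map coordinates."""
    p_cam = R @ (np.asarray(p_world, dtype=float) - cam_center)
    if p_cam[2] <= 0:
        return None                 # point lies behind the camera
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]         # pixel coordinates (u, v)
```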
7
GIS 3-D View
GIS “camera-eye view” of vector boundary data and GPS spot heights
8
GIS 3-D View
3-D reverse look-up of point co-ordinates:
Easting: 352552 m
Northing: 336353 m
Elevation: 16.93 m
Do this for every pixel, combining with image (colour) data
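One way to realise the reverse look-up is to march along each pixel's viewing ray until it meets the terrain sampled from the digital elevation model. The sketch below assumes the same pinhole conventions as above; dem_elevation is a hypothetical helper returning ground height at a given easting/northing, not a GRASS GIS function.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, cam_center, dem_elevation,
                    step=0.5, max_range=500.0):
    # Viewing ray for pixel (u, v), rotated into map coordinates.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray_world = R.T @ ray_cam
    ray_world /= np.linalg.norm(ray_world)

    # March outward until the ray drops below the terrain surface.
    # dem_elevation(easting, northing) is a hypothetical DEM sampler.
    for d in np.arange(step, max_range, step):
        p = cam_center + d * ray_world
        if p[2] <= dem_elevation(p[0], p[1]):
            return p                # (easting, northing, elevation) of the hit
    return None                     # no terrain intersection within max_range
```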
9–15
Multiple Images — Results (seven slides of result images)
16
Accurate Camera Localisation
Flowchart inputs: image data, GIS digital elevation model, camera calibration data, low-res. GPS data, GIS vector data
Processing loop (starting from the initial position estimate, via the GIS perspective transform model):
– Select vector features for visibility in image
– Segmentation & edge detection for these features
– Project vector features into image frame
– Determine bounding box of ROI
– Perform Modified Hough Transform on this ROI
– Update camera position from MHT results
Iterate until the final calculated position is obtained
17
Camera Location Approximation
Low-cost 2-metre resolution GPS
Yaw / pitch / roll inertial sensor
Sensor fusion results in an initial estimate of position (easting, northing, elevation) and orientation
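The slides do not detail the fusion scheme, so the sketch below simply pairs the GPS fix with a rotation matrix built from the yaw/pitch/roll reading; the Z-Y-X Euler convention is an assumption.

```python
import numpy as np

def initial_pose(easting, northing, elevation, yaw, pitch, roll):
    """Pair the GPS fix with an orientation built from yaw/pitch/roll (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])  # yaw
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])  # pitch
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])  # roll
    orientation = Rz @ Ry @ Rx      # Z-Y-X Euler composition (assumed)
    position = np.array([easting, northing, elevation])
    return position, orientation
```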
18
Accurate Camera Localisation
Match as many features as possible between GIS vector data (e.g. buildings, land features) and the raster-based camera image
Use vector attributes from the GIS to improve image processing
Optimisation approach based on a Modified Hough Transform
Largest errors are in roll/pitch/yaw (RPY); image-based information will significantly reduce these
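The Modified Hough Transform itself is not spelled out in these slides; as a stand-in, the sketch below runs standard probabilistic Hough line detection (OpenCV) on the edge map of a region of interest, which is the kind of image evidence the matching step would consume. Threshold values are arbitrary examples.

```python
import cv2
import numpy as np

def detect_line_segments(gray_roi):
    # Edge map of the ROI, then probabilistic Hough line detection.
    edges = cv2.Canny(gray_roi, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                               minLineLength=30, maxLineGap=5)
    # Each segment is (x1, y1, x2, y2) in ROI pixel coordinates.
    return [] if segments is None else [tuple(s[0]) for s in segments]
```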
19
Accurate Camera Localisation (localisation flowchart from slide 16 repeated)
20
GIS Data
Initial approximation of observer position
Measured low-res GPS points
House (example GIS feature)
21
GIS-aided Landmark Extraction
1. Distortion-corrected image acquired with the vehicle-mounted DV camera
2. Projected house outline from the GIS 3-D view module
3. Arbitrary search ROI
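A plausible way to obtain the search ROI is to take the bounding box of the projected outline vertices and pad it to cover the expected pose error; the margin below is an assumed value, not taken from the slides.

```python
import numpy as np

def roi_from_projected_outline(pixel_pts, image_shape, margin=40):
    """pixel_pts: Nx2 projected outline vertices; margin: padding in pixels
    to allow for the error in the initial pose estimate (assumed value)."""
    pts = np.asarray(pixel_pts, dtype=float)
    h, w = image_shape[:2]
    u_min = max(int(np.floor(pts[:, 0].min())) - margin, 0)
    v_min = max(int(np.floor(pts[:, 1].min())) - margin, 0)
    u_max = min(int(np.ceil(pts[:, 0].max())) + margin, w - 1)
    v_max = min(int(np.ceil(pts[:, 1].max())) + margin, h - 1)
    return u_min, v_min, u_max, v_max
```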
22
GIS-aided Landmark Extraction
4. House found within the ROI (using image processing)
23
Automatic Camera Localisation
Update the approximation of camera location until object positions coincide (normally 3 non-coplanar objects)
Simultaneously use many vector features from the GIS data that may also be identified through image processing (hedges, walls, etc.)
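The slides describe iterating the camera position until projected GIS features coincide with their image detections. A standard way to solve this from matched 2-D/3-D points is perspective-n-point; the sketch below uses OpenCV's solvePnP as an illustrative stand-in, not the authors' exact optimisation.

```python
import cv2
import numpy as np

def refine_pose(world_pts, image_pts, K):
    """world_pts: Nx3 map coordinates of matched GIS features;
    image_pts: Nx2 pixel positions found by image processing."""
    ok, rvec, tvec = cv2.solvePnP(np.asarray(world_pts, dtype=np.float64),
                                  np.asarray(image_pts, dtype=np.float64),
                                  np.asarray(K, dtype=np.float64), None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    cam_center = (-R.T @ tvec).reshape(3)   # camera position in map coordinates
    return R, cam_center
```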
24
Requirements for Extension to Real-time Usage
Remote access to a server running the GIS and image processing
Efficient GIS / mobile robot interfaces
Use GIS attributes to select landmark features that are likely to have the lowest image-processing load
Pre-planning of expected routing “images”
25
Summary
Calibrated camera images can be enhanced
GIS electronic map data reduces image processing time and improves landmark extraction
Automatic perspective transformation of multiple images & view reconstruction enables 3-D changes to be found
May be used in real time with some efficiency improvements
GIS use greatly improves efficiency in vision-based navigation and environmental analysis