Image-Based Models with Applications in Robot Navigation
Dana Cobzas, PhD thesis
Supervisor: Hong Zhang

3D Modeling in Computer Graphics
- Graphics model: detailed 3D geometric model of a scene
- Goal: rendering new views
- Figure: real scene -> acquisition (range sensors, modelers) -> geometric model + texture -> rendering -> new view [Pollefeys & van Gool]

Mapping in Mobile Robotics
- Navigation map: representation of the navigation space
- Goal: tracking/localizing the robot
- Figure: environment -> sensors -> map building -> map -> localization/tracking -> navigation of the robot

Same objective: how to model existing scenes?
Traditional geometry-based approaches: geometric model + surface model + light model
- Modeling complex real scenes is slow
- Achieving photorealism is difficult
- Rendering cost is tied to scene complexity
+ Easy to combine with traditional graphics
Alternative approach, image-based modeling: non-geometric model built from images
- Acquiring real scenes is difficult
- Difficult to integrate with traditional graphics
+ Achieving photorealism is easier when starting from real photos
+ Rendering cost is independent of scene complexity
In this work we combine the advantages of both for mobile-robot localization and predictive display.

This thesis
Investigates the applicability of image-based modeling and rendering (IBMR) techniques in mobile robotics. Questions addressed:
- Is it possible to use an image-based model as a navigation map for mobile robotics?
- Do such models provide the desired accuracy for the specific applications, localization and tracking?
- What advantages do they offer compared to traditional geometry-based models?

Approach
- Solution: reconstructed geometric model combined with image information
- Two models:
  Model 1 (calibrated): panorama with depth
  Model 2 (uncalibrated): geometric model with dynamic texture
- Applications in localization/tracking and predictive display

Model 1: Panoramic model

Model 1: Overview
Standard panorama: no parallax; reprojection is possible only from the same viewpoint.
Solution: add depth/disparity information, obtained by
1. stereo from two panoramic images,
2. depth from standard planar-image stereo, or
3. depth from a laser range-finder.
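To make concrete why depth restores parallax, here is a minimal sketch (not the thesis implementation) that back-projects each panorama pixel using its depth and splats it into a new pinhole view. The cylindrical projection model, the shared focal length f, and the nearest-pixel z-buffer splat are simplifying assumptions.

```python
import numpy as np

def render_from_cylindrical(panorama, depth, f, R, t):
    """Forward-splat a cylindrical panorama with per-pixel depth into a
    pinhole view at pose (R, t).  Nearest-pixel splatting with a z-buffer;
    no hole filling or blending."""
    H, W, _ = panorama.shape
    cy = H / 2.0
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project every panorama pixel: azimuth runs around the cylinder,
    # the vertical axis is perspective with focal length f (pixels).
    theta = 2.0 * np.pi * u / W
    X = depth * np.sin(theta)
    Y = depth * (v - cy) / f
    Z = depth * np.cos(theta)
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3)

    # Transform into the new camera and project with a pinhole model.
    cam = pts @ R.T + t
    colors = panorama.reshape(-1, 3)
    keep = cam[:, 2] > 1e-3
    cam, colors = cam[keep], colors[keep]
    px = (f * cam[:, 0] / cam[:, 2] + W / 2).astype(int)
    py = (f * cam[:, 1] / cam[:, 2] + cy).astype(int)

    # Z-buffered splat: the closest point wins at each target pixel, which
    # is how depth resolves the overlaps a flat panorama cannot.
    out = np.zeros_like(panorama)
    zbuf = np.full((H, W), np.inf)
    inside = (px >= 0) & (px < W) & (py >= 0) & (py < H)
    for x, y, z, c in zip(px[inside], py[inside], cam[inside, 2], colors[inside]):
        if z < zbuf[y, x]:
            zbuf[y, x] = z
            out[y, x] = c
    return out
```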

Depth from stereo
- Cylindrical image-based panoramic model + depth map
- Trinocular vision system (Point Grey Research)
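As an illustration of the planar-image-stereo route, a hedged sketch using OpenCV block matching on a rectified pair; the file names, matcher parameters, and calibration values f and B are placeholders, and this is not the Point Grey trinocular pipeline used in the thesis.

```python
import cv2
import numpy as np

# Hypothetical file names for a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching on the rectified images; parameters are illustrative only.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixels

# Depth from disparity for a rectified pair: Z = f * B / d,
# with focal length f (pixels) and baseline B (metres) from calibration.
f, B = 800.0, 0.12          # placeholder calibration values
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = f * B / disparity[valid]
```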

Depth from laser range-finder
- 180-degree panoramic mosaic with corresponding range data (spherical representation)
- Data come from different sensors, so registration is required:
  - CCD camera
  - laser range-finder
  - pan unit
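A minimal sketch of the final step of such a registration: once a rigid laser-to-camera transform (R_lc, t_lc) has been estimated by an external calibration procedure (not shown, and not necessarily the thesis procedure), the range points can be projected into the mosaic to produce a co-registered depth image. The cylindrical projection model and parameters W, H, f, cy are assumptions.

```python
import numpy as np

def register_range_to_panorama(range_pts, R_lc, t_lc, W, H, f, cy):
    """Project laser range points (N x 3, laser frame) into a cylindrical
    panoramic mosaic of size H x W, giving a sparse registered depth image."""
    # Rigid transform from the laser frame into the camera/panorama frame.
    pts = range_pts @ R_lc.T + t_lc

    # Cylindrical projection: azimuth selects the column, the vertical
    # coordinate is perspective with focal length f (in pixels).
    radial = np.hypot(pts[:, 0], pts[:, 2])
    keep = radial > 1e-6
    pts, radial = pts[keep], radial[keep]
    theta = np.arctan2(pts[:, 0], pts[:, 2])
    u = np.floor((theta % (2 * np.pi)) / (2 * np.pi) * W).astype(int) % W
    v = np.floor(f * pts[:, 1] / radial + cy).astype(int)

    # Keep points that land inside the mosaic; radial distance is the depth.
    depth = np.full((H, W), np.nan, dtype=np.float32)
    ok = (v >= 0) & (v < H)
    depth[v[ok], u[ok]] = radial[ok]
    return depth
```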

Model 1: Applications
Absolute localization:
- Input: image + depth
- Features: planar patches, vertical lines
Incremental localization:
- Input: intensity image
- Assumes: approximate pose
- Features: vertical lines
Predictive display

Model 2: Geometric model with dynamic texture

Model 2: Overview
Figure: input images -> model (geometric model + dynamic texture) -> applications (tracking, rendering)

Geometric structure
Figure: tracked features -> structure-from-motion algorithm -> camera poses + 3D structure
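The slide does not name a particular structure-from-motion algorithm; as one standard instance, the sketch below shows Tomasi-Kanade-style affine factorization, which recovers camera motion and structure from the tracked-feature matrix (not necessarily the exact method used in the thesis).

```python
import numpy as np

def affine_factorization(tracks):
    """Tomasi-Kanade style affine factorization of tracked features.

    tracks: (F, P, 2) array of P points tracked through F frames.
    Returns the stacked 2F x 3 affine motion matrix and the 3 x P
    structure, both defined up to an affine ambiguity.
    """
    F, P, _ = tracks.shape

    # Subtracting the per-frame centroid removes camera translation.
    centred = tracks - tracks.mean(axis=1, keepdims=True)

    # Measurement matrix: all u-rows stacked over all v-rows (2F x P).
    W = np.concatenate([centred[..., 0], centred[..., 1]], axis=0)

    # The noise-free W has rank 3, so a truncated SVD splits it into
    # a motion (camera) factor and a structure factor.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])           # 2F x 3 motion (poses)
    S = np.sqrt(s[:3])[:, None] * Vt[:3]    # 3 x P structure
    return M, S
```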

Dynamic texture
Figure: input images I_1 ... I_t and the re-projected geometry yield a texture plus a variability basis
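In spirit, the variability basis can be computed as a PCA of the textures once they are warped into a common model parameterisation; the minimal sketch below shows that step and may differ in detail from the thesis formulation.

```python
import numpy as np

def texture_basis(warped_textures, k=5):
    """Mean texture and k-mode variability basis from textures that have
    already been warped into a common (model) parameterisation.

    warped_textures: (T, H, W) stack of per-view textures."""
    T, H, W = warped_textures.shape
    X = warped_textures.reshape(T, -1).astype(np.float64)

    mean = X.mean(axis=0)
    # SVD of the centred textures: the leading right singular vectors are
    # the dominant modes of appearance variation across the input views.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                          # k x (H*W) variability basis
    coeffs = (X - mean) @ basis.T           # per-view blending coefficients
    return mean.reshape(H, W), basis.reshape(k, H, W), coeffs

# Rendering then warps (mean + new_coeffs @ basis) onto the re-projected
# geometry, with coefficients chosen to match the desired viewpoint.
```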

3D SSD Tracking
- Goal: determine camera motion (rotation + translation) from image differences
- Assumes: sparse geometric model of the scene
- Features: planar patches
Figure: initial motion, past/current motion, past/current warp, differential warp, differential motion, 3D model
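A hedged sketch of the underlying estimation: each planar patch and a candidate pose induce a homography, and one Gauss-Newton step adjusts the 6-DOF pose to minimise the SSD between the warped current image and the patch template. Numerical Jacobians are used here for brevity, whereas the slide's differential warp refers to an analytic formulation; the plane parameterisation and the patch dictionary layout (normal, dist, coords, template) are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotation(w):
    """Rotation matrix from an axis-angle 3-vector (Rodrigues formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * Kx @ Kx

def patch_residual(params, image, K, patch):
    """SSD residual of one planar patch under pose params = (axis-angle, t)."""
    R, t = rotation(params[:3]), params[3:]
    n, d = patch["normal"], patch["dist"]   # patch plane n.X = d in the reference frame
    # Homography induced by the plane between the reference and current views.
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    uv1 = np.c_[patch["coords"], np.ones(len(patch["coords"]))]
    cur = uv1 @ H.T
    cur = cur[:, :2] / cur[:, 2:3]
    # Sample the current image at the warped positions (row, column order).
    sampled = map_coordinates(image, [cur[:, 1], cur[:, 0]], order=1)
    return sampled - patch["template"]

def ssd_pose_step(params, image, K, patches, eps=1e-4):
    """One Gauss-Newton update of the 6-DOF pose from all patches."""
    r = np.concatenate([patch_residual(params, image, K, p) for p in patches])
    J = np.zeros((len(r), 6))
    for i in range(6):                       # numerical Jacobian, column by column
        dp = np.zeros(6)
        dp[i] = eps
        ri = np.concatenate([patch_residual(params + dp, image, K, p) for p in patches])
        J[:, i] = (ri - r) / eps
    delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return params + delta
```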

Tracking example

Tracking and predictive display
- Goal: track the robot's 3D pose along a trajectory
- Input: geometric model (acquired from images) and initial pose
- Features: planar patches

Thesis contributions
Contrast calibrated and uncalibrated methods for capturing scene geometry and appearance from images:
- panoramic model with depth data (calibrated)
- geometric model with dynamic texture (uncalibrated)
Demonstrate the use of the models as navigation maps with applications in mobile robotics:
- absolute localization
- incremental localization
- model-based tracking
- predictive display

Thesis questions
- What advantages do image-based models offer compared to traditional geometry-based models?
  - The image information is used to solve the data-association problem.
  - Model renderings are used to predict the robot's location for a remote user.
- Do they provide the desired accuracy for the specific applications, localization and tracking?
  - The geometric model (reconstructed from images) is used by the localization/tracking algorithms; their accuracy depends on the accuracy of the reconstructed model.
  - The model accuracy can also be improved during navigation, since different levels of accuracy are needed depending on the location (large space vs. narrow space) (future work).
- Is it possible to use an image-based model as a navigation map for mobile robotics?
  - A combination of geometric and image-based models can be used as a navigation map.

Comparison with current approaches
Mobile robotics map:
+ Image information for data association
+ Complete model that can be rendered (closer to human perception)
- Concurrent localization and mapping (SLAM, Durrant-Whyte)
- Invariant features (light, occlusion) (SIFT, Lowe)
- Uncertainty in feature location (localization algorithms)
Graphics model (dynamic texture: hybrid image + geometric model):
+ Easy acquisition with an uncalibrated camera (raysets, geometric models)
+ Photorealism (geometric models)
+ Traditional rendering using the geometric model (raysets)
- Automatic feature detection for tracking, larger scenes
- Denser geometric model (relief texture)
- Light invariance (geometric models, photogrammetry)

Future work
Mobile robotics map:
- Improve the map during navigation
- Different 'map resolutions' depending on robot pose
- Incorporate uncertainty in robot pose and features
- Light- and occlusion-invariant features
- Predictive display: control the robot's motion by 'pointing' or 'dragging' in image space
Graphics model (dynamic texture):
- Automatic feature detection for tracking
- Light-invariant model
- Compose multiple models into a scene based on intuitive geometric constraints
- Detailed geometry (range information from images)