View Planning Candidacy Exam Paul Blaer December 15, 2003.


The View Planning Problem: find a set of sensor configurations that efficiently and accurately fulfills a reconstruction or inspection task. The positions are often found sequentially, so the problem is sometimes called the Next Best View (NBV) Problem.

Tasks:
– Inspection
– Surveillance
– 3D models of smaller objects
– 3D models of large objects (such as buildings)
– Mapping for mobile robots

View Planning Literature
1. Model Based Methods
– Cowan and Kovesi, 1988
– Tarabanis and Tsai, 1992
– Tarabanis, et al., 1995
– Tarbox and Gottschlich, 1995
– Scott, Roth and Rivest, 2001
2. Non-Model Based Methods
Volumetric Methods:
– Connolly, 1985
– Banta et al., 1995
– Massios and Fisher, 1998
– (Papadopoulos-Organos, 1997)
– (Soucey, et al., 1998)
Surface-Based Methods:
– Maver and Bajcsy, 1993
– (Yuan, 1995)
– Zha, et al., 1997
– Pito, 1999
– Reed and Allen, 2000
– Klein and Sequeira, 2000
– Whaite and Ferrie, 1997
3. Art Gallery Methods
– (Xie, et al., 1986)
– Gonzalez-Banos, et al., 1997
– Danner and Kavraki, 2000
4. View Planning for Mobile Robots
– Gonzalez-Banos, et al., 2000
– Grabowski, et al., 2003
– Nuchter, et al., 2003

Typical View Planning Constraints
– Fundamental: increase knowledge of the viewing volume.
– Scanning: ensure that the viewing volume can be scanned.
– Overlap: resample part of the object already scanned and be able to identify that part.
– Tolerance: sample the object with a minimum accuracy.
– Self-termination: the algorithm should recognize when the task is complete.
– Computational burden: the algorithm should be able to compute the NBV in a computationally feasible amount of time.
Other constraints: few assumptions; generalizable.

“Automatic Sensor Placement from Vision Task Requirements,” C. K. Cowan and P. D. Kovesi, 1988
Finds camera viewpoints for inspecting a scene. Requirements: a resolution constraint, a focus constraint, a field of view constraint, and a visibility constraint. A view surface is computed for each requirement and the surfaces are intersected. The constraints extend to laser scanners.

“The MVP Sensor Planning System for Robotic Vision Tasks,” K. A. Tarabanis, R. Y. Tsai, and P. K. Allen, 1995
Given a CAD model of the scene and task requirements, compute a view that fulfills the task. Requirements: resolution, a focus constraint, a field of view constraint, and a feature visibility constraint (solved in “Computing Occlusion-Free Viewpoints,” Tarabanis and Tsai, 1992). The requirements are written as inequalities, and an optimization procedure is run to maximize the quality of the viewpoints.

“Planning for Complete Sensor Coverage in Inspection,” G. H. Tarbox and S. N. Gottschlich, 1995
“View Planning for Multistage Object Reconstruction,” W. R. Scott, G. Roth and J.-F. Rivest, 2001
Model based approaches using a camera and a laser with a fixed baseline. A measurability matrix, C(i,k), is computed. Tarbox and Gottschlich: the next view is based on glancing angles and “difficulty to view.” Scott, Roth, and Rivest: similar, but they add an incremental process and a constraint on sensor measurement error.

“The Determination of Next Best Views,” C. I. Connolly, 1985
“The ‘Best-Next-View’ Algorithm for Three-Dimensional Scene Reconstruction Using Range Images,” J. E. Banta, et al., 1995
Connolly: a volumetric approach with no prior information. The volume is stored as an octree, with regions labeled empty, object surface, or unknown. A sphere around the object is discretized into viewpoints, and the NBV is the viewpoint that sees the most unknown voxels. Banta, et al.: similar to Connolly, but voxels are only labeled as occupied or unoccupied, and views are chosen at points of maximum curvature on the object.
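The voxel-counting criterion can be sketched as follows. This is a toy 2D illustration, not Connolly's octree implementation: the "visibility" test is a simple facing-hemisphere check, and all function names and data are illustrative.

```python
# Toy sketch of Connolly-style NBV selection: label voxels, discretize a
# viewing circle around the object, and pick the viewpoint that would see
# the most "unknown" voxels. Visibility is faked with a hemisphere test.
import math

EMPTY, SURFACE, UNKNOWN = 0, 1, 2

def sphere_viewpoints(n):
    """Discretize a viewing circle around the object (2D for simplicity)."""
    return [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
            for i in range(n)]

def visible_unknown(viewpoint, voxels):
    """Count unknown voxels on the hemisphere facing the viewpoint."""
    vx, vy = viewpoint
    return sum(1 for (x, y), label in voxels.items()
               if label == UNKNOWN and x * vx + y * vy > 0)

def next_best_view(voxels, n_views=16):
    views = sphere_viewpoints(n_views)
    return max(views, key=lambda v: visible_unknown(v, voxels))

# Toy model: the unknown voxels cluster on the +x side,
# so the selected view direction points toward +x.
voxels = {(1, 0): UNKNOWN, (1, 1): UNKNOWN, (-1, 0): SURFACE, (0, -1): EMPTY}
print(next_best_view(voxels))
```

A real implementation would replace the hemisphere test with ray casting through the octree from each candidate sensor position.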

“Occlusions as a Guide for Planning the Next View,” J. Maver and R. Bajcsy, 1993
An occlusion-based approach with no prior knowledge, using a camera-laser triangulation system. The planning is done in two stages:
– Resolve occlusions caused by the laser stripe not being visible to the camera; correct by rotating in the scanning plane.
– Resolve occlusions caused by the laser not reaching parts of the scene; correct by rotating the scanning plane itself.

More Occlusion Based Methods
“Active Modeling of 3-D Objects: Planning on the Next Best Pose (NBP) for Acquiring Range Images,” H. Zha, K. Korooka, T. Hasegawa, and T. Nagata, 1997
The NBV is computed by maximizing a linear combination of three weighted functions: an extending constraint for covering unexplored regions, an overlapping constraint for registration, and a smoothness constraint for registration.
“A Best Next View Selection Algorithm Incorporating a Quality Criterion,” N. A. Massios and R. B. Fisher, 1998
Voxels are partitioned as empty, unseen, seen, or occlusion plane. Occlusion planes are computed along jump edges. A quality criterion is based on the difference between the incident angle of the scanner and the normal of the voxel being scanned. NBVs are chosen to lie in the direction of the occlusion planes while also maximizing the quality of the voxels being imaged.
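A minimal sketch of the angle-based quality idea, under stated assumptions: a voxel is sampled well when the viewing direction is close to its surface normal, so a simple quality term is the (clamped) cosine of the angle between the two unit vectors. The function name and clamping choice are illustrative, not from the Massios-Fisher paper.

```python
# Quality of viewing a voxel: cosine of the angle between the (unit)
# viewing direction and the voxel normal, clamped at zero so that
# back-facing or grazing voxels contribute nothing.
import math

def view_quality(view_dir, normal):
    dot = sum(a * b for a, b in zip(view_dir, normal))
    return max(0.0, dot)

# A voxel seen head-on scores highest; one tilted 60 degrees scores half.
print(view_quality((0, 0, 1), (0, 0, 1)))
print(view_quality((0, 0, 1), (0, math.sin(math.pi / 3), 0.5)))
```

An NBV scorer would then sum this quality over all voxels visible from a candidate view.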

“A Solution to the Next Best View Problem for Automated Surface Acquisition,” R. Pito, 1999
No prior knowledge of the object. The void volume is stored as void patches on the boundary. Observation rays are computed from the surface and projected into positional space (PS). Potential range rays are projected into PS and collinear observation rays are found. The NBV is the scanner position that can view the greatest number of void patches while still viewing a threshold number of patches from the existing model.

“Constraint-Based Sensor Planning for Scene Modeling,” M. K. Reed and P. K. Allen, 2000
Constructs solid models from range imagery with no prior knowledge of the object. A surface is tessellated from the range data and extruded to the bounding box; each surface is labeled as either imaged or occlusion. The N largest targets by surface area are chosen, and the set of positions from which the sensor can image each target is computed (the imaging set). A set of occlusion constraints is computed, and a set of possible views is obtained by subtracting the occlusion constraints from the imaging set. The next view is chosen from that set, and a new range image is incorporated into the model by intersecting it with the current model.

“Autonomous Exploration: Driven by Uncertainty,” P. Whaite and F. P. Ferrie, 1997
Autonomous exploration with a laser range scanner. The target is approximated with superellipsoids; the parameters are estimated and an uncertainty ellipse is found. The NBV is selected in the direction of least certainty. Restricted to a single superellipsoid.

“View Planning for the 3D Modeling of Real World Scenes,” K. Klein and V. Sequeira, 2000
No prior knowledge of the object being scanned. The surface is represented as two meshes: a known mesh and a void mesh, which is the boundary between the known and unknown regions. A cost-benefit ratio is computed. Benefit: how close each viewed point is to its desired sampling density, and how much void volume is viewed. Cost: how hard it is to get to that viewpoint (manually computed). To calculate the quality function at a given viewpoint, the mesh is partially rendered onto a view cube. A view is selected that has the best cost/benefit ratio but maintains an overlap of at least 20% with known regions.
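The selection rule above can be sketched as follows, under stated assumptions: each candidate carries a benefit, a cost, and an overlap fraction, and the best benefit/cost ratio is taken among candidates meeting the 20% overlap floor. The data layout and names are illustrative; the paper computes these quantities by rendering the meshes, not from given numbers.

```python
# Sketch of a Klein-Sequeira-style view selector: filter candidates by a
# minimum overlap with the known surface, then maximize benefit/cost.
def select_view(candidates, min_overlap=0.2):
    """candidates: list of (name, benefit, cost, overlap_fraction)."""
    feasible = [c for c in candidates if c[3] >= min_overlap]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c[1] / c[2])[0]

views = [("v1", 8.0, 2.0, 0.5),   # ratio 4.0, enough overlap
         ("v2", 9.0, 1.0, 0.1),   # best ratio, but too little overlap
         ("v3", 6.0, 3.0, 0.3)]   # ratio 2.0
print(select_view(views))
```

The overlap floor acts as a hard registration constraint: "v2" is the most informative view here but is rejected because it could not be registered against the known model.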

“Randomized Planning for Short Inspection Paths,” T. Danner and L. E. Kavraki, 2000
Extends Gonzalez-Banos, et al.'s (1997) randomized art gallery method to 3-D scenes. The visibility volume of points on the surface is computed, and random points within the volume are chosen. Points are iteratively added to cover more of the surface. An approximation of the Traveling Salesman Problem (TSP) is used to connect the points and form the path.
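The iterative coverage step can be sketched as a greedy set cover over the sampled viewpoints. This is a simplified stand-in: the coverage sets are given directly here, whereas the actual method derives them from visibility computations in the scene.

```python
# Greedy coverage sketch: repeatedly keep the sampled viewpoint that sees
# the most still-uncovered surface patches, until everything visible from
# some candidate is covered.
def greedy_cover(candidates):
    """candidates: {viewpoint_name: set of surface patch ids it sees}."""
    uncovered = set().union(*candidates.values())
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda v: len(candidates[v] & uncovered))
        if not candidates[best] & uncovered:
            break  # remaining patches are visible from no candidate
        chosen.append(best)
        uncovered -= candidates[best]
    return chosen

views = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {5}}
print(greedy_cover(views))
```

The chosen viewpoints would then be ordered by an approximate TSP tour to form the short inspection path.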

“Planning Robot Motion Strategies for Efficient Model Construction,” H. H. Gonzalez-Banos, et al., 2000
Goal: construction of a 2D map of the environment using a Sick laser range sensor. A single scan is taken and polylines are extracted to represent the obstacles. The NBV is found by randomly picking locations in the free space and estimating how much new information would be gained at each. The best location is chosen by maximizing the new information gained while minimizing the distance traveled.
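One plausible form of that trade-off, sketched under stated assumptions: score each sampled candidate by its estimated new information, discounted exponentially by travel distance. The exponential discount and the lambda value here are illustrative choices, not taken from the paper.

```python
# Sketch of an information-vs-distance NBV score: candidates that promise
# a lot of unseen area are preferred, but distant ones are penalized.
import math

def nbv_score(new_area, distance, lam=0.2):
    return new_area * math.exp(-lam * distance)

def pick_nbv(candidates):
    """candidates: list of (position, estimated_new_area, travel_distance)."""
    return max(candidates, key=lambda c: nbv_score(c[1], c[2]))[0]

candidates = [("near_small", 2.0, 1.0),   # little to gain, close by
              ("far_large", 10.0, 5.0),   # lots to gain, far away
              ("mid", 6.0, 2.0)]          # a good compromise
print(pick_nbv(candidates))
```

Tuning lambda shifts the planner between exploratory behavior (small lambda) and conservative, short moves (large lambda).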

“Autonomous Exploration via Regions of Interest,” R. Grabowski, P. Khosla, and H. Choset, 2003
Goal: construct a 2D map of the environment with sonars. Data is fused into an occupancy map. Measurements with a low separation angle are highly coupled, so next best views are chosen whose poses are not highly coupled (i.e., have higher separation angles). After a view is taken, the regions that can see the same feature, but from a different angle, are marked as regions of interest.

“Planning Robot Motion for 3D Digitalization of Indoor Environments,” A. Nuchter, H. Surmann, and J. Hertzberg, 2003
Goal: construct a 3D model of the environment with a mobile robot, using a pair of Sick laser scanners. The ground plane is scanned and straight lines are extracted; “unseen lines” are then added to close these lines off into a polygon that bounds the free space. The NBV is chosen by randomly sampling views in the free space and evaluating how much of the unseen lines each can view. Views at a great distance or requiring a substantial change in angle are penalized.

Discussion
Typical model acquisition steps (diagram); in current systems, some of these steps are still missing. Older methods relied on a fixed and known sensor workspace. Interest is moving toward mobile robot platforms and the exploration of complex indoor and outdoor environments. In complex exploration tasks, many problems become interrelated: localization, mapping, navigation and path planning, and sensor planning.

Open Problems and Future Research
– Improve efficiency, to support the move toward larger scenes.
– Improve accuracy and robustness: as we move toward more unstructured environments, sensor error will increase.
– Develop online planning methods that take into account not only the changing model but also the changing workspace of the sensor.
– Multisensor fusion approaches: construct models from multiple inputs and plan views that account for the constraints and benefits of more than a single sensor.

“A Mechanism of Automatic 3D Object Modeling,” X. Yuan, 1995
No prior knowledge of the object. The object is represented by surface patches, and a Mass Vector Chain (MVC) is computed: a list of weighted normal vectors, one per surface patch. Because the Gaussian mass of a closed convex object is zero, the sum of the MVC should also be zero. The direction of the unviewed patches is therefore estimated from the MVC of the patches scanned so far.
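The MVC reasoning can be sketched in a few lines (a 2D toy, with illustrative names): sum the area-weighted normals of the scanned patches; since the full sum over a closed convex object vanishes, the negated partial sum points roughly toward the unscanned side.

```python
# Mass Vector Chain sketch: the weighted-normal sum of scanned patches,
# negated, estimates where the unviewed surface lies.
def mvc(patches):
    """patches: list of (area, unit_normal) pairs; area-weighted normal sum."""
    sx = sum(a * n[0] for a, n in patches)
    sy = sum(a * n[1] for a, n in patches)
    return (sx, sy)

def unseen_direction(patches):
    sx, sy = mvc(patches)
    return (-sx, -sy)

# Patches scanned so far all face -x, so the unseen side faces +x.
scanned = [(1.0, (-1.0, 0.0)), (2.0, (-1.0, 0.0))]
print(unseen_direction(scanned))
```

This gives only a direction, not a sensor pose, and the zero-sum argument holds exactly only for closed convex objects, which is the restriction noted above.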

“Uniform and Complete Surface Coverage with a Robot-Mounted Laser Rangefinder,” G. Soucey, F. G. Callari, and F. P. Ferrie, 1998
No prior information; a stripe laser range finder is used. The scanner is swept across the object and the edge voxels are tracked. Edge voxels are clustered to find the longest boundary of the surface. The assumption is that the region beyond the longest edge is the largest region of unexplored space, so views are chosen that view that edge.

“Planning Views for the Incremental Construction of Body Models,” S. Xie, T. W. Calvert, and B. K. Bhattacharya, 1986
A 2-D map of the environment is assumed. The mobile robot's goal is to construct 3-D models, but the 3-D world is projected into the 2-D plane, which simplifies the task to an art gallery problem. Two methods:
– Shape of objects known: the map is partitioned into simple polygons by connecting edges of obstacles. Views are chosen by intersecting the half-planes created by the edges of those simple polygons, greedily covering as many edges as possible.
– Shape of objects unknown: the world is represented by the partial model and the Projected View Lines (PVLs). Views are picked within the polygons created by the edges of the partial model and the PVLs.

“Automatic 3D Digitization Using a Laser Rangefinder with a Small Field of View,” D. Papadopoulos-Organos and F. Schmitt, 1997
No prior information; uses a triangulation-based 3-D laser scanner. The object is stored as voxels as it is acquired. Two types of planning are used:
– Path planning: using only translations in a zigzag pattern, avoiding the object as it is detected.
– Sensor planning: the traditional view planning problem, in which occlusions are resolved; this is not dealt with directly.