PixelLaser: Range from texture

ISVC '10, 11/30/2010, Las Vegas, NV
Max Korbel ’13, Michael Leece ’11, Kenny Lei, Nicole Lesperance ’12, Steve Matsumoto ’12, and Zachary Dodds

Motivation
From their earliest days (e.g., Horswill’s Polly), robots have used image segmentation to estimate which way to steer next, i.e., the general traversability of the terrain ahead. This project pushes segmentation one step further: building range scans similar to those produced by laser range finders (LRFs). Our approach seeks to make the LRF’s large body of mapping, localization, and navigation algorithms available to a much wider audience through low-cost platforms. These image-segmentation scans can then serve as the basis for off-the-shelf spatial-reasoning algorithms such as localization and mapping.

Classification
We use nearest-neighbor classification on small image patches to distinguish traversable from untraversable texture. A comparison of the color and texture filters has guided the selection of image-patch descriptors.
[Figure, training: a training image and the patches indexed in the k-d tree; blue patches are traversable, red patches are untraversable.]
[Figure, classification: the nearest neighbors of one patch and the overall results of classifying a novel image, with examples of classified patches.]
[Figure: image descriptors and their redundancy. Panels: original image; just RGB statistics; RGB and texture filter.]

Segmentation
Each image is segmented via a multi-resolution search for the bottommost transition from traversable to untraversable texture.

Scans from Segments
The transformation from segmentation to distance depends on the height, angle, and internal geometry of the camera. Rather than calibrate, we empirically fit a function mapping image height to range-to-obstacle.
[Figure: plot of range vs. image row.]

Mapping
Our Python port of CoreSLAM yields maps whose quality matches the original authors’.
[Figure: CoreSLAM results from the “playpen”.]
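The nearest-neighbor patch classification described above can be sketched as follows. This is a minimal illustration: descriptors are reduced to plain feature vectors (the actual system combines color statistics and texture filters), and the search is brute force, whereas the real pipeline accelerates the lookup with a k-d tree.

```python
import numpy as np

def classify_patch(patch, train_patches, train_labels, k=5):
    """Vote among the k nearest stored training patches.

    train_patches: (N, D) array of patch descriptors from training images
    train_labels:  (N,) array, 1 = traversable, 0 = untraversable
    k is an illustrative choice, not the system's actual parameter.
    """
    # Distance from the query patch to every remembered patch
    # (a k-d tree replaces this linear scan in the real system).
    dists = np.linalg.norm(train_patches - patch, axis=1)
    nearest = np.argsort(dists)[:k]
    # Majority vote among the k nearest neighbors.
    return int(train_labels[nearest].mean() > 0.5)
```

In the pipeline, every patch of a novel image would be classified this way before the multi-resolution segmentation search looks for the bottommost traversable-to-untraversable transition.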
We are investigating genetic-algorithm approaches to find the patch-descriptor weights that best segment our images. Currently, the most expensive piece of the procedure is the nearest-neighbor lookup within the large k-d tree of remembered patches.
[Figure, segmentation: we run at several resolutions to search for transitions in terrain traversability.]
[Figure, range scan: the resulting range scan, shown as it would appear in a top-down view.]

Localization
Our implementation of Monte Carlo Localization using image-segmentation-based scans shows their power and promise.
[Figure: PixelLaser-based MCL.]

Platform
Elementary! This project uses a netbook with OpenCV and Python atop an iRobot Create. The robot is robust and flexible enough to serve as our primary outreach platform, too. Note that an LRF alone would cost many times more than this entire platform!

Acknowledgments
We gratefully acknowledge support from The Rose Hills Foundation, the Baker Foundation, NSF projects REU #0753306 and CPATH #0939149, and funds from HMC.
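The Monte Carlo Localization loop mentioned above can be sketched in a few lines. This is a simplified 1-D illustration under assumed Gaussian noise models: `expected_range` is a hypothetical map-lookup helper, not part of the original system, which instead compares full PixelLaser scans against its map.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, control, measured_range, expected_range,
             motion_noise=0.05, meas_sigma=0.2):
    """One predict/weight/resample cycle of Monte Carlo Localization.

    particles: (N,) array of candidate 1-D poses.
    expected_range(pose): hypothetical helper returning the range the
    map predicts from that pose; noise parameters are illustrative.
    """
    # Predict: apply the motion command plus noise to every particle.
    particles = particles + control + rng.normal(0, motion_noise, len(particles))
    # Weight: score each particle by how well the map's predicted range
    # matches the range measured from the image segmentation.
    expected = np.array([expected_range(p) for p in particles])
    w = np.exp(-0.5 * ((expected - measured_range) / meas_sigma) ** 2)
    w /= w.sum()
    # Resample particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]
```

For example, in a 10 m corridor with `expected_range = lambda p: 10.0 - p`, particles spread uniformly along the corridor concentrate near the true pose after a few measurement updates.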