Pointing Based Object Localization
CS223b Final Project, Stanford University Bio-Robotics Lab
Paul Nangeroni & Ashley Wellman
March 17, 2008

( Motivation )
- Present robotic object detection relies on dense stereo mapping of 3D environments
- Pointing-based object localization is an intuitive interface for improving the accuracy of object detectors
- The project represents several advances over prior art:
  - Uses the actual human line of sight (eye through fingertip)
  - Works against cluttered backgrounds
  - Detects objects in free space

( Approach: Face Detection )

( Approach: Stereopsis )
Step 1: Warp the images along the epipolar lines of the eye and fingertip in the left image
Step 2: Use NCC along the epipolar lines to find the matching eye and fingertip in the right image
Step 3: Project the eye and fingertip locations into 3D
Step 4: Resolve projection errors via least squares
Step 5: Create the line-of-sight vector; the object is known to lie on that line
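Steps 3–5 can be sketched with a standard linear least-squares (DLT) triangulation. This is a minimal sketch of the general technique, not the project's actual code; the projection matrices and pixel coordinates are placeholders for whatever the stereo calibration produced.

```python
import numpy as np

def triangulate(P_l, P_r, x_l, x_r):
    """Linear least-squares (DLT) triangulation of one point.

    P_l, P_r: 3x4 camera projection matrices (left/right).
    x_l, x_r: (u, v) pixel coordinates of the same feature in each image.
    Each view contributes two rows to A; the 3D point is the null vector of A.
    """
    A = np.vstack([
        x_l[0] * P_l[2] - P_l[0],
        x_l[1] * P_l[2] - P_l[1],
        x_r[0] * P_r[2] - P_r[0],
        x_r[1] * P_r[2] - P_r[1],
    ])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

def line_of_sight(eye, fingertip):
    """Step 5: ray from the 3D eye position through the 3D fingertip."""
    d = fingertip - eye
    return eye, d / np.linalg.norm(d)
```

The SVD solve absorbs small reprojection errors in the two rays in a least-squares sense, which is the role of Step 4 above.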

( Approach: Stereopsis )
Step 6: Reproject the actual eye and fingertip positions back into 2D
Step 7: Rotate the images along the line of sight and create a slice from the fingertip to the edge of the image
Step 8: Apply SIFT and RANSAC to the slice
Step 9: Locate the target object by selecting the match point closest to the centerline of the slice
Step 10: Project that point into 3D and find the closest point along the known line of sight; this point is the location of the target object

[Figure legend: RANSAC point, target object, NCC points, reprojected points, SIFT matches, RANSAC matches, minimum norm from line of sight]
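The "minimum norm from line of sight" in Step 10 is an orthogonal projection of the triangulated match point onto the sight ray. A minimal sketch, assuming the eye position and ray direction from the earlier steps (names hypothetical):

```python
import numpy as np

def closest_point_on_ray(origin, direction, point):
    """Return the point on the line through `origin` along `direction`
    that minimizes the distance to `point` (the minimum-norm projection)."""
    d = direction / np.linalg.norm(direction)  # tolerate non-unit input
    t = np.dot(point - origin, d)              # signed distance along the ray
    return origin + t * d
```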

( Results + Future Work )
Conclusions:
- World coordinates output from stereo are accurate to within 3 cm at a range of 2.5 m
- Face and finger detection need more training
- Object localization is sensitive to background clutter
- The detected object location often falls at an edge or corner rather than at the centroid of the object itself

Future Work:
- Use the object location to center a high-resolution close-up for improved accuracy and efficiency
- Use a laser to highlight the target object before the robotic arm attempts to grasp it
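The roughly 3 cm accuracy at 2.5 m is in the same ballpark as the usual stereo depth-error model, dZ ≈ Z²·σ_d / (f·B). The focal length, baseline, and matching error below are illustrative assumptions for a sanity check, not this rig's actual calibration:

```python
def depth_error(Z, f_px=800.0, baseline_m=0.12, disparity_err_px=0.5):
    """Rough stereo depth-error model: dZ ~ Z^2 * sigma_d / (f * B).

    f_px, baseline_m, and disparity_err_px are assumed values chosen
    only to illustrate the quadratic growth of error with range.
    """
    return Z ** 2 * disparity_err_px / (f_px * baseline_m)

err_at_2_5m = depth_error(2.5)  # on the order of a few centimeters
```

Because the error grows with Z², the proposed high-resolution close-up (which effectively increases f in pixels) is a sensible way to tighten localization.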

( Breakdown of Work )
Paul (60%): Stereo calibration, stereopsis, object localization
Ashley (40%): Eye detection, fingertip detection