System Integration and Experimental Results. Intelligent Robotics Research Centre (IRRC), Department of Electrical and Computer Systems Engineering, Monash University.


System Integration and Experimental Results
Visual Perception and Robotic Manipulation, Springer Tracts in Advanced Robotics, Chapter 7
Geoffrey Taylor and Lindsay Kleeman
Intelligent Robotics Research Centre (IRRC), Department of Electrical and Computer Systems Engineering, Monash University, Australia

Taylor and Kleeman, Visual Perception and Robotic Manipulation, Springer Tracts in Advanced Robotics
2 Overview
– Stereoscopic light stripe scanning
– Object modelling and classification
– Multi-cue tracking (edges, texture, colour)
– Visual servoing
– Real-world experimental manipulation tasks with an upper-torso humanoid robot

3 Motivation
To enable a humanoid robot to perform manipulation tasks in a domestic environment:
– A domestic helper for the elderly and disabled
Key challenges:
– Ad hoc tasks with unknown objects
– Robustness to measurement noise/interference
– Robustness to calibration errors
– Interaction to resolve ambiguities
– Real-time operation

4 Architecture

5 Light Stripe Scanning
Triangulation-based depth measurement.
[Figure: stripe generator and camera separated by baseline B, scanning an object at depth D]
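The triangulation idea on this slide can be sketched in code: a 3D surface point is recovered by intersecting the camera ray through a stripe pixel with the known light plane. This is an illustrative simplification (planar geometry, pinhole camera, invented function and parameter names), not the scanner's actual implementation.

```python
import numpy as np

def stripe_point(pixel_x, focal_len, baseline, stripe_angle):
    """Hypothetical single-camera stripe triangulation sketch."""
    # Camera at the origin looking along +z; stripe generator offset by
    # `baseline` along x, its light plane rotated by `stripe_angle` about y.
    # Ray through the image pixel (pinhole model, normalised coordinates):
    ray = np.array([pixel_x / focal_len, 0.0, 1.0])
    # Light plane passes through (baseline, 0, 0) with a tilted normal
    normal = np.array([np.cos(stripe_angle), 0.0, -np.sin(stripe_angle)])
    plane_point = np.array([baseline, 0.0, 0.0])
    # Ray/plane intersection: n.(t*ray - p0) = 0  =>  t = n.p0 / n.ray
    t = normal.dot(plane_point) / normal.dot(ray)
    return t * ray  # 3D surface point (x, y, z)
```

With a vertical (untilted) light plane at x = 0.2 and a pixel at normalised x = 0.1, the intersection lands at depth z = 2.0.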

6 Stereo Stripe Scanner
Three independent measurements provide redundancy for validation.
[Figure: laser diode mounted between left camera L and right camera R (baseline 2b); point X on the scanned object projects to x_L and x_R on the left and right image planes; stripe plane angle θ]
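The redundancy can be illustrated as a validation test: a candidate left/right stripe pair is stereo-reconstructed, and accepted only if the resulting point lies on the known laser plane. The rectified-camera geometry, names and tolerance below are assumptions for the sketch, not the book's validation equations.

```python
import numpy as np

def validate_candidate(x_left, x_right, focal_len, baseline, plane_normal,
                       plane_offset, tol=0.005):
    """Reject stripe candidates (reflections, cross talk) that are
    inconsistent with the known laser plane (illustrative sketch)."""
    # Rectified cameras at x = -b/2 (left) and x = +b/2 (right)
    disparity = x_left - x_right
    if disparity <= 0:
        return False  # point would be behind the cameras: reject outright
    z = focal_len * baseline / disparity
    x = z * (x_left + x_right) / (2.0 * focal_len)  # x in the midpoint frame
    point = np.array([x, 0.0, z])
    # Accept only if the reconstruction lies on the laser plane
    return abs(np.dot(plane_normal, point) - plane_offset) < tol
```

A genuine stripe return passes the plane test; a spurious correspondence (e.g. a secondary reflection) reconstructs off the plane and is rejected.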

7 Reflections/Cross Talk

8 Single Camera Result
[Figures: single camera scanner vs. robust stereoscopic scanner]

9 3D Object Modelling
Want to find objects with minimal prior knowledge.
– Use geometric primitives to represent objects
Segment 3D scan based on local surface shape.
[Figure: surface type classification]

10 Segmentation
Fit plane, sphere, cylinder and cone to segments.
Merge segments to improve fit of primitives.
[Figures: raw scan, final segmentation, surface type classification, geometric models]
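The fit-and-merge step can be sketched for the simplest primitive, a plane: fit by SVD, then merge two segments only if a single primitive still fits the union with low residual. The function names and the residual threshold are illustrative assumptions; the chapter also fits spheres, cylinders and cones.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit: the normal is the right singular vector
    with the smallest singular value of the centred point cloud."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    rms = np.sqrt(np.mean(((points - centroid) @ normal) ** 2))
    return normal, centroid, rms

def should_merge(seg_a, seg_b, max_rms=0.002):
    """Merge two segments only if one plane still fits both well
    (hypothetical merge criterion for the sketch)."""
    _, _, rms = fit_plane(np.vstack([seg_a, seg_b]))
    return rms < max_rms
```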

11 Object Classification
Scene described by adjacency graph of primitives.
Objects described by known sub-graphs.
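The sub-graph idea can be shown with a toy brute-force embedding check: does the object's labelled graph of primitives appear inside the scene's adjacency graph? A real system would use a proper sub-graph isomorphism algorithm; this exhaustive version and its data layout are assumptions for illustration only.

```python
from itertools import permutations

def find_object(scene_nodes, scene_edges, obj_nodes, obj_edges):
    """Brute-force sub-graph embedding sketch.
    scene_nodes: {id: primitive_type}; scene_edges: set of frozenset pairs;
    obj_nodes: {name: primitive_type}; obj_edges: list of (name, name)."""
    ids = list(scene_nodes)
    for perm in permutations(ids, len(obj_nodes)):
        mapping = dict(zip(obj_nodes, perm))
        # Primitive types must match node for node
        if any(obj_nodes[o] != scene_nodes[mapping[o]] for o in obj_nodes):
            continue
        # Every object adjacency must exist in the scene graph
        if all(frozenset((mapping[a], mapping[b])) in scene_edges
               for a, b in obj_edges):
            return mapping
    return None
```

For example, a "ball on table" model (sphere adjacent to plane) is located in a scene containing a plane, a cylinder and a sphere.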

12 Modelling Results
Box, ball and cup:
[Figures: raw colour/range scan and textured polygonal models]

13 Multi-Cue Tracking
Individual cues are only robust under limited conditions:
– Edges fail in low contrast and are distracted by texture
– Texture is not always available and is distracted by reflections
– Colour gives only partial pose
Fusion of multiple cues provides robust tracking in unpredictable conditions.

14 Tracking Framework
3D model-based tracking: object models are built from light stripe range data.
Colour (selector), edges and texture (trackers) are measured simultaneously in every frame.
Measurements fused in an extended Kalman filter:
– Cues interact with the state through measurement models
– Individual cues need not recover the complete pose
– Extensible to any cues/cameras for which a measurement model exists
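The fusion step can be sketched as a sequential EKF update over a list of cue measurements, each supplying its own measurement function, Jacobian and noise covariance. Because each cue carries its own model, a cue may observe only part of the pose. This generic update is a sketch of the framework's structure, not the book's code.

```python
import numpy as np

def ekf_update(x, P, cue_measurements):
    """Sequentially fuse cue measurements into state x with covariance P.
    cue_measurements: iterable of (z, h, H, R) where z is the measurement,
    h(x) the predicted measurement, H its Jacobian and R its noise."""
    for z, h, H, R in cue_measurements:
        y = z - h(x)                        # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

A cue observing only the first state component updates that component and leaves the unobserved one untouched, which is exactly why partial-pose cues are admissible.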

15 Colour Cues
Filter created from colour histogram in ROI:
– Foreground colours promoted in histogram
– Background colours suppressed in histogram
[Figures: captured image used to generate filter; output of resulting filter]
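The promote/suppress idea can be sketched as a histogram-ratio filter: each colour bin scores foreground counts against background counts, and the resulting lookup table is back-projected onto a quantised image. The exact filter construction here (ratio with a smoothing constant) is an assumption for illustration.

```python
import numpy as np

def colour_filter(fg_hist, bg_hist, eps=1.0):
    """Per-bin filter: high where a colour is frequent in the foreground
    ROI and rare in the background (illustrative ratio form)."""
    ratio = fg_hist / (bg_hist + eps)
    return np.clip(ratio / ratio.max(), 0.0, 1.0)

def apply_filter(filter_lut, quantised_image):
    """Back-project the filter: per-pixel foreground likelihood."""
    return filter_lut[quantised_image]
```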

16 Edge Cues
Combine with colour to get silhouette edges.
[Figures: Sobel mask directional edges, fitted edges, predicted projected edges]
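The Sobel masks named on the slide are standard; a minimal sketch of extracting gradient magnitude and direction follows (the hand-rolled valid-mode filtering helper is illustrative, not how the tracker is implemented).

```python
import numpy as np

# Standard Sobel mask for the horizontal gradient; its transpose gives gy
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)

def filter2d_valid(img, kernel):
    """Naive valid-mode 2D correlation (illustrative helper)."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def sobel_edges(img):
    """Gradient magnitude and direction from the Sobel masks."""
    gx = filter2d_valid(img, SOBEL_X)
    gy = filter2d_valid(img, SOBEL_X.T)
    return np.hypot(gx, gy), np.arctan2(gy, gx)
```

On a vertical step edge the response is strongest along the step, with zero vertical gradient.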

17 Texture Cues
[Figures: rendered prediction, feature detector, matched templates, outlier rejection, final matched features]

18 Tracking Result

19 Visual Servoing
Position-based 3D visual servoing (IROS 2004).
Fusion of visual and kinematic measurements.

20 Visual Servoing
6D pose of the hand estimated using an extended Kalman filter with visual and kinematic measurements.
The state vector also includes the hand-eye transformation and camera model parameters for calibration.
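At its simplest, position-based servoing drives the estimated hand pose toward the target pose with a proportional law. The toy below servos translation only and uses an invented gain; the real controller regulates the full 6D pose from the Kalman filter estimate.

```python
import numpy as np

def servo_step(current_pose, target_pose, gain=0.5):
    """One proportional position-based servo step (translation-only toy).
    The pose error shrinks geometrically with the gain each iteration."""
    error = target_pose - current_pose
    return current_pose + gain * error

# Illustrative closed loop: iterate until the hand converges on the target
pose = np.zeros(3)
target = np.array([0.3, -0.1, 0.5])
for _ in range(25):
    pose = servo_step(pose, target)
```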

21 Grasping Task
Grasp a yellow box without prior knowledge of objects in the scene.

22 Grasping Task

23 Pouring Task
Pour the contents of a cup into a bowl.

24 Pouring Task

25 Smell Experiment
Fusion of vision, smell and airflow sensing to locate and grasp a cup containing ethanol.

26 Summary
Integration of stereoscopic light stripe sensing, geometric object modelling, multi-cue tracking and visual servoing allows the robot to perform ad hoc tasks with unknown objects.
Suggested directions for future research:
– Integrate tactile and force sensing
– Cooperative visual servoing of both arms
– Interact with objects to learn and refine models
– Verbal and gestural human-machine interaction