Toward humanoid manipulation in human-centered environments. T. Asfour, P. Azad, N. Vahrenkamp, K. Regenstein, A. Bierbaum, K. Welke, J. Schroder, R. Dillmann.

Presentation transcript:

Toward humanoid manipulation in human-centered environments T. Asfour, P. Azad, N. Vahrenkamp, K. Regenstein, A. Bierbaum, K. Welke, J. Schroder, R. Dillmann Presentation by Yixing Chen

OUTLINE
- Introduction
- The humanoid robot ARMAR-III
- Robot control architecture
- Collision-free motion planning
- Object recognition and localization
- Programming of grasping and manipulation tasks

Introduction
- Why do we build humanoid robots?
- What are the requirements for a humanoid robot in a human-centered environment?
- How can a humanoid robot work in a human-centered environment?

The humanoid robot ARMAR-III Video:

Abilities:
- Deal with the household environment
- Deal with a wide variety of objects
- Deal with different activities
Configuration:
- Head: eyes and vision system
- Upper body: arms and hands, used for grasping
- Mobile platform: maintains stability and provides mobility

Seven subsystems: head, left arm, right arm, left hand, right hand, torso, and a mobile platform.

The whole configuration is designed to resemble the human body.

Robot control architecture
- The task planning level: specifies the subtasks for the multiple subsystems of the robot. This is the highest level; it provides task representation and is responsible for the scheduling of tasks and the management of resources and skills.
- The task coordination level: activates sequential/parallel actions for the execution level in order to achieve the given task goal.
- The task execution level: characterized by control theory; executes specific sensory-motor control commands.
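To illustrate how these three levels can interact, here is a minimal Python sketch. All class and method names are hypothetical; the paper does not specify an API, so this is only one plausible structure:

```python
class TaskPlanner:
    """Highest level: task representation, scheduling, resource management."""
    def plan(self, task):
        # Decompose a task into subtasks for the robot's subsystems
        # (hard-coded here purely for illustration).
        return ["navigate_to_table", "localize_object", "grasp_object"]

class TaskExecutor:
    """Lowest level: runs the sensory-motor control loops."""
    def execute(self, subtask):
        print(f"running low-level controllers for: {subtask}")

class TaskCoordinator:
    """Middle level: activates sequential/parallel actions for the executor."""
    def __init__(self, executor):
        self.executor = executor
    def run(self, subtasks):
        for subtask in subtasks:   # sequential here; could dispatch in parallel
            self.executor.execute(subtask)

TaskCoordinator(TaskExecutor()).run(TaskPlanner().plan("fetch cup"))
```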

Collision-free motion planning
- Multiresolution planning system. Low resolution: path planning for the mobile platform or rough hand movements (faster). High resolution: path planning for complex hand movements such as dexterous manipulation and grasping (slower).
- Collision-free paths are guaranteed using Rapidly-exploring Random Trees (RRT).
- Enlarged robot models: the enlarged models are constructed by slightly scaling up the convex 3D models of the robot so that the minimum distance between the surfaces of the original and the enlarged model reaches a lower bound d_freespace. A configuration validated against the enlarged model therefore keeps at least this clearance, so the method can be applied without any explicit distance computation, which speeds up the algorithm.
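A minimal sketch of the RRT idea in a toy 2D configuration space. The clearance margin in collision_free() stands in for the enlarged-model check; the obstacle layout, step size, and d_freespace value are assumptions for illustration only:

```python
import math
import random

OBSTACLES = [((5.0, 5.0), 1.0)]   # (center, radius) of circular C-space obstacles
D_FREESPACE = 0.2                 # assumed clearance margin

def collision_free(q):
    # Inflating each obstacle by D_FREESPACE mimics checking the enlarged model.
    return all(math.dist(q, c) > r + D_FREESPACE for c, r in OBSTACLES)

def rrt(start, goal, step=0.3, iters=5000):
    tree = {start: None}                          # node -> parent
    for _ in range(iters):
        rnd = goal if random.random() < 0.1 else (
            random.uniform(0, 10), random.uniform(0, 10))
        near = min(tree, key=lambda q: math.dist(q, rnd))
        d = math.dist(near, rnd)
        if d == 0:
            continue
        new = tuple(n + step * (r - n) / d for n, r in zip(near, rnd))
        if collision_free(new):
            tree[new] = near                      # extend tree toward sample
            if math.dist(new, goal) < step:       # goal reached: backtrack
                path = [goal, new]
                while tree[path[-1]] is not None:
                    path.append(tree[path[-1]])
                return path[::-1]
    return None

print(rrt((1.0, 1.0), (9.0, 9.0)))
```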

Collision-free motion planning: lazy collision checking
The robot is very complex in shape, so even when the sampled path points themselves are collision-free, the path segments between them may still be in collision. Lazy collision checking decouples the collision checks for C-space samples from those for path segments, which speeds up the search for a path.
- First step: the normal sampling-based RRT algorithm searches for a solution path in the C-space.
- Second step: the enlarged-model approach is used to check the collision status of the path segments of the solution path. If a path segment between two configurations c_i and c_i+1 fails the collision test, a local detour is created by starting a subplanner which searches a way around the C-space obstacle.
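A sketch of the second (lazy validation) step. Here segment_free() is a hypothetical enlarged-model segment test and subplan() a hypothetical local planner; neither name is taken from the paper:

```python
def lazy_validate(path, segment_free, subplan):
    """Validate each segment of `path`; repair failing segments with a detour."""
    repaired = [path[0]]
    for a, b in zip(path, path[1:]):
        if segment_free(a, b):            # cheap check deferred until now
            repaired.append(b)
        else:
            detour = subplan(a, b)        # local planner around the obstacle
            if detour is None:
                return None               # local replanning failed
            repaired.extend(detour[1:])   # splice in detour, skip repeated `a`
    return repaired
```

Splicing the detour into the existing path keeps the already-validated segments, so only the failing portion of the path is replanned.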

Result

Object recognition and localization
To work in a household environment, the robot must be able to recognize objects and localize them with accuracy high enough for grasping. This part introduces recognition and localization based on shape and based on texture.
Recognition and localization based on shape:
- Applies to uniformly colored objects.
- Color simplifies the segmentation problem, letting the robot concentrate on complicated tasks such as filling and emptying the dishwasher.
- Combines appearance-based methods, model-based methods, and stereo vision.
Recognition and localization based on texture:
- Applies to textured objects such as food boxes; recognition is more complex.

Recognition and localization based on shape
Segmentation:
- Perform color segmentation in HSV color space.
- Using stereo vision, each resulting blob is represented by its bounding box, centroid, and number of pixels.
Region processing pipeline:
- Use Principal Component Analysis and normalize the region in size.
- Resize the region to a square window of 64x64 pixels.
6D localization:
- Six-dimensional space: varying position and orientation.
- Position and orientation are estimated independently: the position estimate is calculated by triangulating the centroid of the color blob; the orientation estimate is retrieved from the database for the matched view.
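An OpenCV sketch of the segmentation and position-triangulation steps. The HSV thresholds and the stereo projection matrices P_left/P_right are placeholders, not the paper's parameters:

```python
import cv2
import numpy as np

def color_blob_centroid(bgr, hsv_lo=(0, 120, 70), hsv_hi=(10, 255, 255)):
    """Segment a colored blob in HSV space and return its centroid (u, v)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, hsv_lo, hsv_hi)        # color segmentation
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:                              # no blob found
        return None
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

def triangulate_position(c_left, c_right, P_left, P_right):
    """Position estimate by triangulating the blob centroid in both images."""
    X = cv2.triangulatePoints(P_left, P_right,
                              c_left.reshape(2, 1), c_right.reshape(2, 1))
    return (X[:3] / X[3]).ravel()                  # homogeneous -> 3D point
```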

Recognition and localization based on shape: typical result of a scene analysis. Input image of the left camera (left) and 3D visualization of the recognition and localization result (right).

Recognition and localization based on texture
Feature calculation:
- Different views of the same image patch around a feature point vary, so the images cannot simply be correlated.
- The descriptors are computed on the basis of a local planar assumption.
- SIFT (Scale-Invariant Feature Transform) features are used to describe the images.
- Each feature carries a position (u, v), a rotation angle φ, and a feature vector {x_j}.
2D localization:
- Compute the correspondences between the current view and the training image.
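A sketch of texture-based 2D localization using standard OpenCV SIFT matching; the paper's exact matching pipeline may differ, and the file names and the 0.75 ratio threshold are assumptions:

```python
import cv2
import numpy as np

train = cv2.imread("training_image.png", cv2.IMREAD_GRAYSCALE)
view = cv2.imread("scene_left.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT features in the training image and the current view.
sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(train, None)
kp_v, des_v = sift.detectAndCompute(view, None)

# Lowe's ratio test keeps only distinctive correspondences.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des_t, des_v, k=2)
        if m.distance < 0.75 * n.distance]

# Robustly estimate the mapping from training image to view.
src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_v[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the training image corners into the view: the 2D localization box.
h, w = train.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
box = cv2.perspectiveTransform(corners, H)
```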

Recognition and localization based on texture: correspondences between the view of the scene and the training image. The left picture is the view, and the right picture is the training image. The blue box illustrates the result of the 2D localization.

Recognition and localization based on texture
6D localization: the pose is calculated from the correspondences between 3D model coordinates and image coordinates. To improve accuracy, the calibrated stereo system is used to compute depth information with maximum accuracy:
- Determine highly textured points within the calculated 2D contour of the object in the left camera image.
- Determine correspondences with subpixel accuracy in the right camera image.
- Calculate a 3D point for each correspondence.
- Fit a 3D plane to the calculated 3D point cloud.
- Calculate the intersections of the four 3D lines through the 3D corners in the left image with the 3D plane.
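A small numpy sketch of the plane-fitting and line-plane intersection steps; an SVD least-squares fit is used here, and the paper does not state its exact fitting method:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through an (N, 3) point cloud via SVD."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                    # direction of least variance
    return centroid, normal

def intersect_ray_plane(origin, direction, centroid, normal):
    """Intersect the 3D line origin + t * direction with the fitted plane."""
    t = np.dot(centroid - origin, normal) / np.dot(direction, normal)
    return origin + t * direction
```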

Recognition and localization based on texture: recognition and localization for boxes and cups. The left picture is the view (input image), and the right picture is the 3D visualization of the result.

Programming of grasping and manipulation tasks
The central idea: a database with 3D models of all the objects encountered in the robot workspace, together with a 3D model of the robot hand.
Integrated grasp planning system:
- The global model database: contains the CAD models of all objects and a set of feasible grasps for each object.
- The offline grasp analyser: uses the models of the objects and the hand to compute a set of stable grasps.
- An online visual procedure to identify objects in stereo images: matches image features against the prebuilt object models, then determines location and pose.
After localizing the object, the robot selects a grasp for it from the set of stable grasps.

Integrated grasp planning system

Offline grasp analysis
To ensure the quality of a grasp, in this approach a grasp of a given object is described by the following features (see the sketch after this list):
- Grasp type
- Grasp starting point (GSP)
- Grasp center point (GCP)
- Approaching direction
- Hand orientation
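A hypothetical sketch of how one database entry could bundle these five features; field names and types are illustrative, not the authors' data format:

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class GraspDescriptor:
    grasp_type: str               # e.g. "power", "precision": selects preshape
    gsp: np.ndarray               # grasp starting point, 3D, object frame
    gcp: np.ndarray               # grasp center point, 3D, object frame
    approach_dir: np.ndarray      # approaching direction (unit vector)
    hand_orientation: np.ndarray  # e.g. quaternion (x, y, z, w)

# The database maps each object model to its set of precomputed stable grasps.
grasp_db: dict[str, list[GraspDescriptor]] = {}
```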

Offline grasp analysis: grasp type
The grasp type determines the grasp execution control, such as the hand preshape posture, the control strategy, and which fingers are used in the grasp.

Grasp video
Let's watch a grasp video: watch?v=QYEJJA52wG8
And the whole work process video: watch?v=87cbivmjfe8

Summary
This paper introduced:
- A humanoid robot consisting of an active head with a vision system, two arms with five-fingered hands, a torso, and a holonomic platform.
- An integrated system for grasping and manipulation tasks on a humanoid robot. The system incorporates a vision system for the recognition and localization of objects, a path planner for the generation of collision-free trajectories, and an offline grasp analyser that provides the most feasible grasp configuration for each object.

Thank You!