Scenario and Integration in GRASP

Plan for day 2
- Input/output definition for the scenario, year 1
- Tasks and responsibilities per person
- Initial cooperation plan (personnel exchange)

Some issues
- Scenario in GRASP
- Libraries in GRASP
- Vision in GRASP
- Haptics in GRASP
- Hand models (robot and human)
- Objects in GRASP
- Knowledge representation in GRASP: objects, actions

Scenario Year 4: Empty a shopping basket
Important aspects:
- Demonstrate novel aspects in each WP
- Initial integration in WP7
- Implementation of the control architecture (WP3) at UJI
Suggestions:
- Each partner presents what he/she can already do!
- WP7: robot platforms in OpenRAVE

Scenario year 1: Handling some of the 8 objects on the table (not in the basket)
- WP1: observe a human grasping one of these objects and provide the tracked 3D model of the hand and a classification of the type of grasp. Note: a grasp is defined by grasp type, grasp starting point, approach direction, and hand orientation
- WP1 (Heiner): provide kinematics of the grasps, grasping points, and covert and overt attention (see Daniel's presentation this afternoon)
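The four grasp attributes above can be captured in a small record. A minimal sketch in Python, where the field names and the four grasp-type labels are illustrative assumptions, not the project's actual schema:

```python
from dataclasses import dataclass
from typing import Tuple

# Illustrative grasp-type labels; GRASP's actual taxonomy may differ.
GRASP_TYPES = ("power", "precision", "pinch", "lateral")

@dataclass
class GraspDescriptor:
    """One observed grasp: type, starting point, approach direction, hand orientation."""
    grasp_type: str                                       # one of GRASP_TYPES
    start_point: Tuple[float, float, float]               # 3D grasp starting point
    approach_dir: Tuple[float, float, float]              # unit approach vector
    hand_orientation: Tuple[float, float, float, float]   # quaternion (w, x, y, z)

    def __post_init__(self):
        if self.grasp_type not in GRASP_TYPES:
            raise ValueError(f"unknown grasp type: {self.grasp_type!r}")

g = GraspDescriptor("power", (0.10, 0.25, 0.0), (0.0, 0.0, -1.0), (1.0, 0.0, 0.0, 0.0))
```

A record of this shape would be the natural unit to pass from WP1's observation pipeline to WP2's mapping stage.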

Scenario year 1: Handling some of the 8 objects on the table (not in the basket)
- WP2: discrete mapping of observed human grasp activities to one/two-hand robots; extract DMPs from the observed movement; representations for integration
- WP3: demonstrate the grasping cycle using the grasp types from WP2 and the object type and attributes from WP4, on the UJI platform
- WP4: background/foreground segmentation in the case of textured objects, grasp point generation, pose of the object (6D) (bounding boxes), identification of the primitive shapes (3D model fitting)
- WP5: definition of the expectation model necessary for detecting surprise

Scenario year 1: Handling some of the 8 objects on the table (not in the basket)
- WP6: initial version of the simulator; integration of COLLADA and PAL into OpenRAVE; reproduction of what has been demonstrated in WP1, the mapping from WP2, and the location from WP4 on ARMAR in OpenRAVE
- WP7: proposal for integration for the described scenario, with focus on how to represent objects and actions (MMM, DMPs), input/output definitions, OpenRAVE/MCA; reproduction of grasping cylindrical textured objects on ARMAR using the 6D pose from WP4

Objects (1)
- Object representations via meshes in all work packages
- Cylinder-like and box-like objects
- GRASP objects (8 items):
  - Boxed salt (SFB 588), object ID 2
  - Cylindrical salt (SFB 588), object ID 3
  - Gauloises red
  - Zwieback (SFB 588), object ID 11
  - Cups (Dani's cups, i4280.JPG): two different cups (two each, to generate textured and non-textured)
- Complete representation of one object: meshes, stereo

Objects (2)
- Original monocular images (10 views): Darius
- Stereo images (5 views): for Markus, Lech
- Stereo information (Markus, Lech): internal and external calibration (depth maps for the 5 stereo views)
- Who will provide the meshes for these objects? (UniKarl)
- Geometrical models of the objects are also needed

Input/output for scenario year 1
WP1
- Input: human experiments from WP1 (LMU, Daniel) -> FORTH
- Output: grasp type (1 of 4) and approach vector (FORTH, Antonis, Georgios) (human grasping library)
WP2
- Input: human experiments from WP1 (LMU, Daniel)
- Grasp ontology, i.e. hierarchy of human hand postures; discrete mapping to the Barrett and Karlsruhe hands; approach vector relation to the objects in the database (KTH, OB, Dani, Dan and Thomas)
- Representation of human grasps using DMPs (Martin, Tamim)

Input/output for scenario year 1
WP3
- Input: results from KTH (output of WP2), 6D pose of grasp objects (WP4, Chavdar), object type (WP4, Chavdar)
- Output: grasping cycle on the UJI platform (UJI, Javier) and (LUT, Janne)
WP4
- Input: stereo images from UJI plus calibration data (UJI, Antonio)
- Output: 2.5D point cloud + mesh (TUW, Lech and Mario); 6D object pose estimation (TUM, Chavdar; TUW, Markus and Lech); remote distributed computing
WP5
- Input: ontology hierarchy from WP2 (Maria, Dan and Thomas), object and scene representation from WP4 (TUM, Darius)
- Output: stereo sequences of human manipulation
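The 6D object pose that WP4 hands to WP3 and WP6 combines a 3D position with a 3D orientation. A minimal sketch of such a record, where the choice of a unit quaternion for the orientation is our assumption (the slides do not fix a parameterization):

```python
import math
from dataclasses import dataclass

@dataclass
class Pose6D:
    """6D object pose: 3D position plus orientation as a quaternion (w, x, y, z)."""
    x: float
    y: float
    z: float
    qw: float
    qx: float
    qy: float
    qz: float

    def normalized(self) -> "Pose6D":
        """Return a copy with the quaternion scaled to unit length."""
        n = math.sqrt(self.qw**2 + self.qx**2 + self.qy**2 + self.qz**2)
        return Pose6D(self.x, self.y, self.z,
                      self.qw / n, self.qx / n, self.qy / n, self.qz / n)

# Example: object 0.85 m above the table origin, identity orientation.
p = Pose6D(0.3, 0.0, 0.85, 2.0, 0.0, 0.0, 0.0).normalized()
```

Agreeing on one such convention (frame, units, quaternion order) across WP3, WP4 and WP6 is exactly the kind of input/output definition this slide set is negotiating.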

Input/output for scenario year 1
WP6
- Input: results from KTH (output of WP2), 6D pose of grasp objects (WP4, Chavdar), object type (WP4, Chavdar), ARMAR controller OpenRAVE plugin (UniKarl, Stefan, Markus)
- Output: execution of observed movements (WP1) in the simulation (OpenRAVE in its original version) on ARMAR (demonstration of collision detection for introspection, i.e. collision with the other 7 objects on the table; UniKarl, Tamim)
WP7
- Input: grasp ontology (KTH), action representation (UniKarl, KTH), object representations (TUW, Markus), I/O from all other WPs (see above); models of prediction (WP2, WP5, ????)
- Output: demonstrate integration of OpenRAVE and MCA (UniKarl, Stefan, Tamim) and the predict-act-perceive cycle on ARMAR (UniKarl, Markus, Tamim)
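The predict-act-perceive cycle named in WP7's output can be sketched as a small loop in which a mismatch between the expectation model (WP5) and the observed outcome signals surprise. All function bodies below are placeholder stubs of our own, not project code:

```python
def predict(world_state, action):
    """Expectation model (WP5 stub): what should the world look like after the action?"""
    expected = dict(world_state)
    if action == "grasp":
        expected["grasped"] = True
    return expected

def act(action):
    """Execute the action on the robot (stub: report what was executed)."""
    return action

def perceive(executed_action):
    """Observe the resulting world state (stub: perception agrees with execution)."""
    return {"grasped": executed_action == "grasp"}

def predict_act_perceive(world_state, action):
    """One turn of the cycle; expectation/observation mismatch = surprise."""
    expected = predict(world_state, action)
    observed = perceive(act(action))
    surprise = any(observed[k] != expected.get(k) for k in observed)
    return observed, surprise

observed, surprised = predict_act_perceive({"grasped": False}, "grasp")
```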

Group meetings
- Simulator (Antonio): Stefan, Markus, Beatrix, Sami, Antonio, Alex
- Control group (Ville): Janne, Javier, Dan
- Representations of object, action and surprise (Dani): Thomas, Maria, Darius, Nikos, Markus, Tamim
- Human observation (Antonis): Daniel, Heiner, Martin, Georgios
- Scene observation (Lech): Lech, Chavdar, Manuel

Representations of object, action and surprise
Object (COLLADA file): ID, category, shape, mesh, weight, material, inertia, CoM; grasp types
Action: vocabulary of actions
- Reach (6D pose)
- Pre-shape (grasp type)
- Grasp (approach vector, hand orientation, grip forces)
- Lift (move in Cartesian space)
- Transport (move in Cartesian space)
- Place (move in Cartesian space, contact force)
- Release (open hand)
Surprise: Object-Action Complexes (OACs), embodiment-specific and embodiment-invariant
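The action vocabulary above can be written down directly as an enumeration, with the slide's parameterizations kept as comments; a pick-and-place task is then just a sequence over this vocabulary. A minimal sketch (the enum values and the example sequence are illustrative):

```python
from enum import Enum

class Action(Enum):
    """Action vocabulary from the slide; parameters noted in comments."""
    REACH = "reach"          # parameterized by a 6D pose
    PRE_SHAPE = "pre_shape"  # grasp type
    GRASP = "grasp"          # approach vector, hand orientation, grip forces
    LIFT = "lift"            # move in Cartesian space
    TRANSPORT = "transport"  # move in Cartesian space
    PLACE = "place"          # move in Cartesian space, contact force
    RELEASE = "release"      # open hand

# A full pick-and-place task as a sequence over the vocabulary:
pick_and_place = [Action.REACH, Action.PRE_SHAPE, Action.GRASP,
                  Action.LIFT, Action.TRANSPORT, Action.PLACE, Action.RELEASE]
```

A closed vocabulary like this is what makes the discrete mapping between observed human activity (WP1/WP2) and robot execution (WP3/WP6) tractable.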

Representations of object, action and surprise
- The robot opens its eyes: everything is background.
- Object-centered surprise detection, not robot-centered surprise detection
- Effect, cause, task, agent (robot, human), context, world

Needed libraries
- Human Grasps Library (HGL); involved partners: LMU, FORTH, UniKarl, Otto Bock
- Robot Grasps Libraries for different robot hands (RGL); involved partners: KTH, UJI, UniKarl, TUW, TUM, LUT
- Mapping between HGL and RGL; involved partners: KTH, UniKarl
- Object library (database): daily objects such as chocolate bars, apples, milk boxes, ...; involved partners: UniKarl (object database) + all
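The HGL-to-RGL mapping listed above is, at its simplest, a per-hand lookup table from human grasp types to robot grasp types. A minimal sketch, where every label (hand names, grasp names) is a hypothetical placeholder rather than a catalogued entry from the actual libraries:

```python
# Hypothetical discrete mapping from human grasp types (HGL) to
# robot grasp types (RGL) for the two robot hands named on the slides.
HGL_TO_RGL = {
    "barrett": {          # 3-finger Barrett hand
        "power": "wrap_3finger",
        "precision": "tripod",
    },
    "karlsruhe": {        # 5-finger Karlsruhe hand
        "power": "wrap_5finger",
        "precision": "pinch_2finger",
    },
}

def map_grasp(human_grasp: str, robot_hand: str) -> str:
    """Look up the robot grasp corresponding to an observed human grasp."""
    try:
        return HGL_TO_RGL[robot_hand][human_grasp]
    except KeyError:
        raise ValueError(f"no mapping for {human_grasp!r} on {robot_hand!r}")

rg = map_grasp("power", "barrett")
```

Keeping the mapping as data rather than code makes it easy to extend when new hands or grasp types enter the libraries.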

First concept for the integration of the simulator
Discussed at the Karlsruhe meeting in July (UniKarl, UJI, KTH, TUM, OttoBock)

Architecture of the simulator
Discussed at the Karlsruhe meeting in July (UniKarl, UJI, KTH, TUM, OttoBock)