Scenario and Integration in GRASP


1 Scenario and Integration in GRASP

2 Plan for day 2
- Input/output definition for the scenario year 1
- Tasks and responsibilities per person
- Initial cooperation plan (personnel exchange)

3 Some issues
- Scenario in GRASP
- Libraries in GRASP
- Vision in GRASP
- Haptics in GRASP
- Hand models (robot and human)
- Objects in GRASP
- Knowledge representation in GRASP: objects, actions

4 Scenario
Year 4: empty a shopping basket
Important aspects:
- Demonstrate novel aspects in each WP
- Initial integration in WP7
- Implementation of the control architecture (WP3) at UJI
Suggestion: each partner presents what he/she can already do!
WP7: robot platforms in OpenRAVE (a loading sketch follows below)
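Since WP7 brings the robot platforms into OpenRAVE, here is a minimal sketch assuming the standard openravepy Python bindings; the robot file is a sample that ships with OpenRAVE, standing in for a partner platform model.

```python
# Minimal sketch: load a robot platform into an OpenRAVE environment.
# 'robots/barrettwam.robot.xml' is a sample model shipped with OpenRAVE,
# used here as a stand-in for a GRASP partner platform.
from openravepy import Environment

env = Environment()                       # create the simulation environment
env.SetViewer('qtcoin')                   # optional: start the default viewer
env.Load('robots/barrettwam.robot.xml')   # load a robot description
robot = env.GetRobots()[0]
print(robot.GetName(), 'loaded with', robot.GetDOF(), 'DOF')
```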

5 Scenario year 1
Handling some of the 8 objects on the table (not in the basket)
WP1: observe a human grasping one of these objects and provide the tracked 3D model of the hand and a classification of the type of grasp
Note: a grasp is defined by (see the record sketch below):
- Grasp type
- Grasp starting point
- Approaching direction
- Hand orientation
WP1 (Heiner): provide kinematics of the grasps, grasping points, and covert and overt attention (see Daniel's presentation this afternoon)
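The four fields above suggest a simple record type; the following is a hypothetical sketch, where field names, types, and frame conventions are assumptions rather than a fixed GRASP interface.

```python
# Hypothetical record for the grasp definition on this slide; the field
# names, types, and frame conventions are assumptions for illustration.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class GraspObservation:
    grasp_type: str          # one of the grasp classes from WP1
    start_point: Vec3        # grasp starting point, object frame
    approach_dir: Vec3       # approaching direction, unit vector
    hand_orientation: Vec3   # hand orientation, e.g. roll/pitch/yaw

g = GraspObservation('cylindrical', (0.0, 0.0, 0.10),
                     (0.0, 0.0, -1.0), (0.0, 1.57, 0.0))
```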

6 Scenario year 1
Handling some of the 8 objects on the table (not in the basket)
WP2: discrete mapping of observed human grasp activities to one/two-hand robots; extraction of DMPs from the observed movement (a fitting sketch follows below); representations for integration
WP3: demonstrate the grasping cycle on the UJI platform, using the grasp types from WP2 and the object type and attributes from WP4
WP4: background/foreground segmentation in the case of textured objects, grasp point generation, 6D object pose (bounding boxes), identification of the primitive shapes (3D model fitting)
WP5: definition of the expectation model necessary for detecting surprise
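For the WP2 step of extracting a Dynamic Movement Primitive from an observed movement, here is a minimal one-dimensional sketch; the constants, basis-function layout, and regression scheme are textbook-style assumptions, not the project's actual implementation.

```python
# Minimal 1-D DMP extraction sketch: invert the transformation system to
# obtain the target forcing term from a demonstration, then fit Gaussian
# basis weights by locally weighted regression. Constants are illustrative.
import numpy as np

def fit_dmp(y, dt, n_basis=20, alpha_z=25.0, alpha_s=4.0):
    beta_z = alpha_z / 4.0                      # critically damped choice
    tau = len(y) * dt                           # movement duration
    yd = np.gradient(y, dt)                     # demonstrated velocity
    ydd = np.gradient(yd, dt)                   # demonstrated acceleration
    y0, g = y[0], y[-1]
    # transformation system: tau^2*ydd = alpha_z*(beta_z*(g-y) - tau*yd) + f
    f_target = tau**2 * ydd - alpha_z * (beta_z * (g - y) - tau * yd)
    s = np.exp(-alpha_s * np.linspace(0.0, 1.0, len(y)))   # canonical phase
    c = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))  # basis centres
    h = n_basis / c                                        # basis widths
    psi = np.exp(-h * (s[:, None] - c) ** 2)               # Gaussian bases
    w = np.array([(s * psi[:, i] * f_target).sum() /
                  (s**2 * psi[:, i]).sum() for i in range(n_basis)])
    return w, (alpha_z, beta_z, alpha_s, tau, y0, g)

# demonstration: a smooth minimum-jerk-like reach from 0 to 1
t = np.linspace(0.0, 1.0, 200)
w, params = fit_dmp(10*t**3 - 15*t**4 + 6*t**5, dt=t[1] - t[0])
```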

7 Scenario year 1
Handling some of the 8 objects on the table (not in the basket)
WP6: initial version of the simulator; integration of COLLADA and PAL into OpenRAVE; reproduction of what has been demonstrated in WP1, the mapping from WP2, and the locations from WP4, on ARMAR in OpenRAVE
WP7: proposal for integration for the described scenario, with focus on how to represent objects and actions (MMM, DMPs), input/output definitions, OpenRAVE/MCA; reproduction of grasping cylindrical textured objects on ARMAR using the 6D pose from WP4 (a pose-composition sketch follows below)
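Reproducing a grasp from a 6D object pose comes down to composing transforms; a minimal sketch, assuming the grasp is stored in the object frame and poses are 4x4 homogeneous matrices (all numeric values are placeholders):

```python
# Sketch: turn a grasp defined in the object frame into a world-frame hand
# target, given the 6D object pose from WP4. Values are placeholders.
import numpy as np

def grasp_in_world(T_world_obj, T_obj_grasp):
    """Compose 4x4 homogeneous transforms: world <- object <- grasp."""
    return T_world_obj @ T_obj_grasp

T_world_obj = np.eye(4)                 # 6D pose of a cylindrical object
T_world_obj[:3, 3] = [0.4, 0.0, 0.8]    # 40 cm in front, 80 cm up

T_obj_grasp = np.eye(4)                 # grasp frame relative to the object
T_obj_grasp[:3, 3] = [0.0, 0.0, 0.1]    # approach from 10 cm above

print(grasp_in_world(T_world_obj, T_obj_grasp))
```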

8 Objects (1)
Object representations via meshes in all workpackages
Cylinder-like and box-like objects
GRASP objects (8 items):
- Boxed salt (SFB 588), object ID 2
- Cylindrical salt (SFB 588), object ID 3
- Gauloises red
- Zwieback (SFB 588), object ID 11
- Cups (Dani's cups, i4280.JPG)
- Two different cups (two of each, to generate textured and non-textured variants)
Complete representation of one object: meshes, stereo

9 Objects (2)
Original monocular images (10 views) for Darius
Stereo images (5 views) for Markus, Lech
Stereo information (Markus, Lech): internal and external calibration (depth maps for the 5 stereo views); see the depth sketch below
Who will provide the meshes for these objects? (UniKarl)
Geometrical models of the objects are also needed
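Given the internal and external calibration, the per-view depth maps can be computed from disparity; here is a sketch assuming rectified pairs, with OpenCV as our (not the project's) choice of library and placeholder file names and calibration values.

```python
# Sketch: depth map from one rectified stereo pair. File names, focal
# length f (pixels) and baseline B (metres) are placeholders; in practice
# they come from the internal/external calibration mentioned above.
import cv2
import numpy as np

left = cv2.imread('view1_left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('view1_right.png', cv2.IMREAD_GRAYSCALE)

# semi-global matching; numDisparities must be a multiple of 16
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=9)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

f, B = 525.0, 0.12                       # placeholder calibration values
depth = np.zeros_like(disparity)
valid = disparity > 0                    # ignore unmatched pixels
depth[valid] = f * B / disparity[valid]  # depth = f * B / disparity
```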

10 Input-output for scenario year 1
WP1
Input: human experiments from WP1 (LMU, Daniel) → FORTH
Output: grasp type (1 of 4) and approaching vector (FORTH, Antonis, Georgios) (human grasping library)
WP2
Input: human experiments from WP1 (LMU, Daniel)
Output: grasp ontology, i.e. hierarchy of human hand postures, discrete mapping to the Barrett and Karlsruhe hands, approach vector in relation to the objects in the database (KTH, OB, Dani, Dan and Thomas); representation of human grasps using DMPs (Martin, Tamim); a mapping-table sketch follows below
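The discrete mapping from the human grasp ontology to robot hand preshapes could be as simple as a lookup table; the grasp and preshape names below are invented placeholders, not the actual KTH ontology labels.

```python
# Hypothetical lookup table for the WP2 discrete mapping from human grasp
# types to Barrett / Karlsruhe hand preshapes. All names are placeholders.
HUMAN_TO_ROBOT_GRASP = {
    'power-cylindrical': {'barrett': 'wrap3',   'karlsruhe': 'power'},
    'power-spherical':   {'barrett': 'spread3', 'karlsruhe': 'sphere'},
    'precision-pinch':   {'barrett': 'pinch2',  'karlsruhe': 'pinch'},
    'lateral':           {'barrett': 'pinch2',  'karlsruhe': 'lateral'},
}

def map_grasp(human_type: str, robot_hand: str) -> str:
    """Return the robot preshape for an observed human grasp type."""
    return HUMAN_TO_ROBOT_GRASP[human_type][robot_hand]

assert map_grasp('power-cylindrical', 'barrett') == 'wrap3'
```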

11 Input-output for scenario year 1
WP3
Input: results from KTH (output of WP2), 6D pose of grasp objects (WP4, Chavdar), object type (WP4, Chavdar)
Output: grasping cycle on the UJI platform (UJI, Javier) and (LUT, Janne)
WP4
Input: stereo images from UJI plus calibration data (UJI, Antonio)
Output: 2.5D point cloud + mesh (TUW, Lech and Mario); 6D object pose estimation (TUM, Chavdar; TUW, Markus and Lech); remote distributed computing; a sketch of the exchanged pose record follows below
WP5
Input: ontology hierarchy from WP2 (Maria, Dan and Thomas), object and scene representation from WP4 (TUM, Darius)
Output: stereo sequences of human manipulation
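The WP4 output consumed by WP3 and WP6 (object type plus 6D pose) suggests a small shared record; the layout below is an assumption about that I/O contract, not a defined project interface.

```python
# Hypothetical WP4 -> WP3/WP6 record: object identity/type plus 6D pose.
# Field names, units, and the quaternion convention are assumptions.
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ObjectPose6D:
    object_id: int                                  # ID in the object database
    object_type: str                                # e.g. 'cylinder' or 'box'
    position: Tuple[float, float, float]            # metres, camera frame
    orientation: Tuple[float, float, float, float]  # unit quaternion (x, y, z, w)

salt = ObjectPose6D(3, 'cylinder', (0.4, 0.0, 0.8), (0.0, 0.0, 0.0, 1.0))
```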

12 Input-output for scenario year 1
WP6
Input: results from KTH (output of WP2), 6D pose of grasp objects (WP4, Chavdar), object type (WP4, Chavdar), ARMAR controller, OpenRAVE plugin (UniKarl, Stefan, Markus)
Output: execution of observed movements (WP1) in the simulation (OpenRAVE in its original version) on ARMAR; demonstration of collision detection for the introspection detection, i.e. collision with the other 7 objects on the table (UniKarl, Tamim); a collision-check sketch follows below
WP7
Input: grasp ontology (KTH), action representation (UniKarl, KTH), object representations (TUW, Markus), I/O from all other WPs (see above); models of prediction (WP2, WP5, ????)
Output: demonstrate the integration of OpenRAVE and MCA (UniKarl, Stefan, Tamim) and the predict-act-perceive cycle on ARMAR (UniKarl, Markus, Tamim).
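OpenRAVE's collision interface covers the demonstration of detecting contact with the other objects on the table; a minimal sketch, assuming openravepy and a placeholder scene file:

```python
# Sketch: check the robot against the rest of the scene with OpenRAVE's
# collision interface. 'table_scene.env.xml' is a placeholder scene file.
from openravepy import Environment, CollisionReport

env = Environment()
env.Load('table_scene.env.xml')          # robot + the 8 objects on the table
robot = env.GetRobots()[0]

report = CollisionReport()
if env.CheckCollision(robot, report=report):
    # an unexpected contact, e.g. with one of the other 7 objects, is the
    # kind of event the introspection/surprise demonstration should flag
    print('collision with', report.plink2.GetParent().GetName())
```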

13 Group meeting
Simulator (Antonio): Stefan, Markus, Beatrix, Sami, Antonio, Alex
Control group (Ville): Janne, Javier, Dan
Representations of object, action and surprise (Dani): Thomas, Maria, Darius, Nikos, Markus, Tamim
Human observation (Antonis): Daniel, Heiner, Martin, Georgios
Scene observation (Lech): Lech, Chavdar, Manuel

14 Representations of object, action and surprise
Object: COLLADA file with ID, category, shape, mesh, weight, material, inertia, CoM; grasp types
Action: vocabulary of actions (a sequence sketch follows below)
- Reach (6D pose)
- Pre-shape (grasp type)
- Grasp (approach vector, hand orientation, grip forces)
- Lift (move in Cartesian space)
- Transport (move in Cartesian space)
- Place (move in Cartesian space, contact force)
- Release (open hand)
Object-Action Complexes (OACs): embodiment-specific and embodiment-invariant
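The vocabulary above can be written down directly as a parameterised action sequence; the action names and parameters follow the slide, while the generic Action class and all numeric values are illustrative assumptions.

```python
# Sketch: the slide's action vocabulary as a parameterised sequence for a
# pick-and-place. The Action class and values are illustrative assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    name: str      # one of the vocabulary entries above
    params: dict   # action-specific parameters

pick_and_place: List[Action] = [
    Action('reach',     {'pose6d': (0.4, 0.0, 0.8, 0.0, 0.0, 0.0)}),
    Action('pre-shape', {'grasp_type': 'cylindrical'}),
    Action('grasp',     {'approach': (0.0, 0.0, -1.0),
                         'hand_orientation': (0.0, 1.57, 0.0),
                         'grip_force': 5.0}),
    Action('lift',      {'cartesian_delta': (0.0, 0.0, 0.1)}),
    Action('transport', {'pose6d': (0.2, 0.3, 0.9, 0.0, 0.0, 0.0)}),
    Action('place',     {'contact_force': 2.0}),
    Action('release',   {}),
]
```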

16 Representations of object, action and surprise
The robot opens its eyes: everything is background.
Object-centered surprise detection, not robot-centered surprise detection (a minimal surprise signal is sketched below)
Effect, Cause, Task, Agent (Robot, Human), Context, World
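One way to read the expectation model together with object-centered surprise is to score how strongly an observation deviates from what the model predicts for each object; the Gaussian model, sigma, and threshold below are assumptions for illustration, not the WP5 expectation model.

```python
# Illustrative object-centred surprise signal: negative log-likelihood of
# the observed object position under a Gaussian expectation.
import numpy as np

def surprise(expected, observed, sigma=0.02):
    d = np.asarray(observed) - np.asarray(expected)
    return 0.5 * float(d @ d) / sigma**2

# the cup was expected to stay where it was; it moved 15 cm sideways
if surprise([0.4, 0.0, 0.8], [0.4, 0.15, 0.8]) > 10.0:
    print('surprise: this object did not behave as expected')
```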

17 Needed libraries
Human Grasps Library (HGL). Involved partners: LMU, FORTH, UniKarl, Otto Bock
Robot Grasps Libraries for different robot hands (RGL). Involved partners: KTH, UJI, UniKarl, TUW, TUM, LUT
Mapping between HGL and RGL. Involved partners: KTH, UniKarl
Object Library (database): daily objects such as chocolate bars, apples, milk boxes, … Involved partners: UniKarl (object database) + all

18 First concept for the integration of the simulator
Discussed at the Karlsruhe Meeting in July (UniKarl, UJI, KTH, TUM, OttoBock)

19 Architecture of the simulator
Discussed at the Karlsruhe Meeting in July (UniKarl, UJI, KTH, TUM, OttoBock)

