1
Eye in hand: Towards GPU accelerated online grasp planning based on pointclouds from in-hand sensor
Andreas Hermann, Felix Mauch, Sebastian Klemm, Arne Roennau
Presented by Beatrice Liang
2
Overview and Motivation
Use in-hand depth cameras + GPU-based collision detection algorithms for grasp planning on the fly
Targets anthropomorphic multi-fingered hands with complex kinematics and geometries
Service robots have multifunctional hands: multiple joints, numerous degrees of freedom
Hardware: Schunk SVH Hand with PMD Nano Depth Sensor
3
Robotic Hands
Generally have 5 to 20 active degrees of freedom to perform grasps
SCHUNK SVH hand: 20 DOF actuated by 9 motors
4
Grasp Planning
Simulate contact between fingers and grasped object to find appropriate joint angles
Databases to store precomputed grasps for known objects
Define grasps for geometric primitives and fit primitives to visible parts of the target object
Fit a set of spheres into detected objects
Estimate the backside of objects by mirroring the visible part at the shadow edge
5
Hand-Eye Calibration
Sense-Plan-Act cycle: the object is perceived, a grasp is planned, and the grasp is executed without sensory input
Closed-loop alternatives: Visual Servoing, Haptic Grasping
6
Proposed Method
Visual exploration via in-hand camera to generate object models
Highly parallelized algorithms
Individual finger-specific motion planning
Compatible with further tactile or force-based refinement
Does not require a mesh-based surface representation
7
Implementation
8
GPU-based Collision Detection with GPU-Voxels
Voxel-based collision detection on the GPU using Octrees, Voxelmaps, and Voxellists
Models consist of dense pointclouds
Volumetric representations of motions, e.g. the Pinch-Grasp Swept-Volume
Voxels can be processed independently of each other
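To make the collision test concrete, here is a minimal CPU sketch of the voxel-overlap idea: discretize each model's pointcloud into a voxel set and count the voxels occupied by both. GPU-Voxels performs this per-voxel in parallel with CUDA and offers octree and voxellist backends; the types and function names below are illustrative, not the library's API.

```cpp
// Minimal sketch of voxel-based collision detection between two pointcloud
// models. Names and data layout are illustrative assumptions.
#include <cstdint>
#include <unordered_set>
#include <vector>

struct Point { float x, y, z; };

// Pack quantized voxel coordinates into one 64-bit key (assumes |coord| < 2^20).
uint64_t voxelKey(float x, float y, float z, float voxelSize) {
    auto q = [voxelSize](float v) {
        return static_cast<uint64_t>(static_cast<int64_t>(v / voxelSize) & 0x1FFFFF);
    };
    return (q(x) << 42) | (q(y) << 21) | q(z);
}

// Discretize a dense pointcloud into its set of occupied voxels.
std::unordered_set<uint64_t> voxelize(const std::vector<Point>& cloud, float voxelSize) {
    std::unordered_set<uint64_t> voxels;
    for (const Point& p : cloud)
        voxels.insert(voxelKey(p.x, p.y, p.z, voxelSize));
    return voxels;
}

// Count voxels occupied by both models; a count > 0 means a collision.
// Each membership test is independent, which is why the real library can
// process voxels in parallel on the GPU.
size_t countCollisions(const std::unordered_set<uint64_t>& a,
                       const std::unordered_set<uint64_t>& b) {
    const auto& small = a.size() < b.size() ? a : b;
    const auto& large = a.size() < b.size() ? b : a;
    size_t hits = 0;
    for (uint64_t v : small)
        if (large.count(v)) ++hits;
    return hits;
}
```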
9
3D Data Acquisition
10
Offline Grasp Rendering
Generate Swept-Volumes for every supported grasp
Grasps are defined by the joints' start/end angles and by the ratio of their coupling
For each grasp there are N = 5 animated DOF and K = 250 distinguishable voxel IDs (limited by memory restrictions)
The number of identifiable intervals per finger motion: ⌊K / N⌋ = ⌊250 / 5⌋ = 50 (see the sketch below)
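One encoding of the Swept-Volume voxel IDs consistent with these numbers: the K = 250 IDs are partitioned among the N = 5 animated DOF, so a voxel's ID identifies both which finger swept it and one of 50 motion intervals. The layout below is an assumption for illustration.

```cpp
// Illustrative Swept-Volume ID layout: finger index and motion interval
// packed into one ID. Constants follow the slide; the scheme is assumed.
#include <cassert>

constexpr int kNumFingers = 5;   // N: animated DOF per grasp
constexpr int kNumIds     = 250; // K: usable voxel IDs (memory-limited)
constexpr int kSteps      = kNumIds / kNumFingers; // 50 intervals per finger

// Map (finger, interpolation step) to a unique voxel ID.
int encodeId(int finger, int step) {
    assert(finger < kNumFingers && step < kSteps);
    return finger * kSteps + step;
}

// Recover which finger and which motion interval a colliding voxel belongs to.
void decodeId(int id, int& finger, int& step) {
    finger = id / kSteps;
    step   = id % kSteps;
}
```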
11
Sensor Data Processing
Uses exact extrinsic calibration
Accumulates measurements in a probabilistic 3D Octree
Avoids surface reconstruction (the algorithm does not require mesh representations)
The stitched output is fed to a tabletop segmentation algorithm
Produces a pointcloud representation of the object's surface
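A minimal sketch of how the probabilistic accumulation typically works, assuming the standard log-odds occupancy update used by probabilistic octrees (e.g. OctoMap-style maps); the increments and bounds below are illustrative, not the paper's parameters.

```cpp
// Log-odds occupancy accumulation: repeated depth measurements raise a
// voxel's occupancy belief, free-space observations lower it, so sensor
// noise averages out over several views. Parameters are assumptions.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <unordered_map>

struct OccupancyMap {
    std::unordered_map<uint64_t, float> logOdds; // voxel key -> belief

    static constexpr float kHit  =  0.85f; // log-odds increment for a hit
    static constexpr float kMiss = -0.4f;  // decrement for observed free space
    static constexpr float kMin  = -2.0f, kMax = 3.5f; // clamping bounds

    void integrate(uint64_t voxel, bool occupied) {
        float& l = logOdds[voxel]; // defaults to 0, i.e. probability 0.5
        l = std::clamp(l + (occupied ? kHit : kMiss), kMin, kMax);
    }

    // Convert accumulated log-odds back to an occupancy probability.
    float probability(uint64_t voxel) const {
        auto it = logOdds.find(voxel);
        if (it == logOdds.end()) return 0.5f; // unknown space
        return 1.0f / (1.0f + std::exp(-it->second));
    }
};
```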
12
Suppress grasps in unknown regions
13
Transformations into virtual workspace
14
Optimization Problem
Input dimensions:
Geometrical transformation between object & hand (6 DOF)
Joint angles of N fingers
Object geometry
Hand geometry
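A minimal struct capturing the search state these dimensions imply, assuming the 6-DOF transformation is parameterized as translation plus Euler angles; the struct and field names are illustrative assumptions.

```cpp
// Candidate grasp state: the 6-DOF object-to-hand pose is what the swarm
// searches over; the joint angles are determined later by closing the
// fingers until first contact.
#include <array>
#include <vector>

struct GraspCandidate {
    // Geometrical transformation between object and hand (6 DOF):
    std::array<double, 3> translation; // x, y, z
    std::array<double, 3> rotation;    // roll, pitch, yaw
    // Joint angles of the N fingers at first contact:
    std::vector<double> jointAngles;
};
```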
15
Grasp Planning Reward Function
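The reward must let the optimizer rank candidate grasps. As an illustration of how such a reward can be computed from voxel collisions, the sketch below scores a grasp by how far each finger closes before its first collision with the object (the closing-angle criterion from the Grasp Model Processing slide); the scoring itself is an assumption, not the paper's formula.

```cpp
// Illustrative grasp reward built from Swept-Volume collision IDs.
#include <vector>

// contactSteps[i]: motion interval (0..49) at which finger i first collides
// with the object; -1 if the finger never touches it.
double graspReward(const std::vector<int>& contactSteps) {
    double reward = 0.0;
    for (int step : contactSteps) {
        if (step < 0) return 0.0; // a finger misses the object: reject grasp
        reward += step;           // deeper closure before contact scores higher
    }
    return reward;
}
```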
16
Hybrid Particle Swarm Optimization (PSO)
Each particle describes the translation and rotation of the object in relation to the hand
Optimization problem: maximize the grasp function over this 6-DOF pose space
Hybrid optimization approach (PSO update sketched below)
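A minimal sketch of one particle swarm step, assuming the textbook PSO velocity/position update over the 6-DOF pose; the paper's hybrid variant and its coefficients are not reproduced here, so the constants below are illustrative.

```cpp
// Textbook PSO update: each particle is a 6-DOF object-to-hand pose
// (x, y, z, roll, pitch, yaw) scored by the grasp reward.
#include <array>
#include <random>
#include <vector>

constexpr int kDims = 6; // translation (3) + rotation (3)
using Pose = std::array<double, kDims>;

struct Particle {
    Pose position, velocity, bestPosition;
    double bestReward = -1e9;
};

void psoStep(std::vector<Particle>& swarm, const Pose& globalBest,
             std::mt19937& rng) {
    // Standard coefficients: inertia, cognitive and social attraction.
    const double w = 0.7, c1 = 1.5, c2 = 1.5;
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (Particle& p : swarm) {
        for (int d = 0; d < kDims; ++d) {
            p.velocity[d] = w * p.velocity[d]
                          + c1 * u(rng) * (p.bestPosition[d] - p.position[d])
                          + c2 * u(rng) * (globalBest[d] - p.position[d]);
            p.position[d] += p.velocity[d];
        }
        // Evaluate the grasp reward at p.position here, then update
        // p.bestPosition / p.bestReward and the swarm's global best.
    }
}
```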
17
Grasp Model Processing
Choose whether precision or power grasps should be planned
Evaluate the grasp:
Transform the object pointcloud into the stretched-out hand
Pull the object out of the hand until there are no more collisions
Intersect the precalculated Swept-Volume with the object
Angle of the fingers at first collision: the lowest colliding Swept-Volume ID maps back to an interpolation step between the joint's start and end angles (see the sketch below)
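A sketch of that last step under the ID encoding assumed earlier: take the lowest colliding motion interval for a finger and interpolate between the grasp's start and end joint angles; the interval count and names are assumptions.

```cpp
// Recover a finger's joint angle at first collision from the intersection
// of its Swept-Volume with the object pointcloud.
#include <algorithm>
#include <vector>

constexpr int kSteps = 50; // identifiable motion intervals per finger

// stepsHit: motion intervals of this finger's Swept-Volume voxels that
// intersect the object (empty if the finger never collides).
double angleAtFirstCollision(const std::vector<int>& stepsHit,
                             double startAngle, double endAngle) {
    if (stepsHit.empty()) return endAngle; // finger closes fully
    int first = *std::min_element(stepsHit.begin(), stepsHit.end());
    double t = static_cast<double>(first) / kSteps; // motion progress in [0, 1)
    return startAngle + t * (endAngle - startAngle);
}
```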
18
Evaluation