Template-Based Manipulation in Unstructured Environments for Supervised Semi-Autonomous Humanoid Robots
Alberto Romay, Stefan Kohlbrecher, David C. Conner, Alexander Stumpf, and Oskar von Stryk
Presented by Michelle Levine

Overview
– Problem: both fully autonomous robots and purely teleoperated robots are inefficient in unstructured environments.
– Approach: combine semi-autonomy and semi-teleoperation through a template-based user interface.
– The UI lets the operator receive an aggregated world model from the onboard sensors and send perceptual and semantic information back to the robot.

Weaknesses of Fully Autonomous Robots in Unstructured Environments
– Need extensive databases of the possible objects to be found.
– Need highly efficient grasping algorithms.
– Need the ability to react to unforeseen circumstances.

Weaknesses of Purely Teleoperated Robots
– Need near real-time feedback without disruptions.
– Require transmission of large amounts of data to the operator.

Related Work
– Much successful work on fully autonomous robots in structured environments; object recognition and mission planning become far more challenging in unstructured environments.
– Affordances (J. J. Gibson): the possible actions that an object offers to an organism in its environment.
– Object-Action Complexes (OACs, Krüger et al.): define the relationships between objects and actions.

Related Work continued
– Nagatani et al.: work on teleoperated robots used in hazardous environments over a wired network; example: the robot Quince deployed to the Fukushima nuclear plant in 2011.
– A similar approach, with key differences:
  – automatic template-fitting algorithms rather than human-assisted alignment
  – motion planning in the control station rather than onboard the robot
  – affordances: the operator manually rotates the valve template rather than specifying an action (open, close) with "+/- 360 degrees"

Strengths of this Approach
Human strengths:
– can work in discrete space
– can easily identify objects of interest
– quick decision making
Robot strengths:
– can perform calculations and gather physical properties (mass and inertia)

Template-Based Communication
– Inspired by the theory of affordances and OACs.
– Faster and more accurate than direct communication between operator and robot.
– Intended for rescue and recovery missions in disaster scenarios.

Pipeline with Templates: Sense → Plan → Walk → Grasp → Use
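The Sense → Plan → Walk → Grasp → Use pipeline can be sketched as a simple phase sequence. This is a hypothetical Python sketch to make the phase ordering concrete; the paper does not prescribe an implementation, and the names are illustrative.

```python
from enum import Enum, auto

class Phase(Enum):
    """The five pipeline phases, in execution order."""
    SENSE = auto()
    PLAN = auto()
    WALK = auto()
    GRASP = auto()
    USE = auto()

PIPELINE = [Phase.SENSE, Phase.PLAN, Phase.WALK, Phase.GRASP, Phase.USE]

def next_phase(current: Phase) -> Phase:
    """Advance to the next phase, looping back to SENSE after USE."""
    i = PIPELINE.index(current)
    return PIPELINE[(i + 1) % len(PIPELINE)]
```

Looping back to SENSE reflects that after using one object, the operator returns to sensing the scene for the next task.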

Object Templates: Overview
– Templates are shown as 3D meshes of the object.
– Meshes include:
  – information such as mass and center of mass
  – grasp templates: pre- and final grasps, and basic grasp types (cylindrical, prismatic, spherical)
  – the possible actions
– The operator overlays the 3D mesh on the sensor data, estimates the real object's pose, and can iterate through grasp templates before acting.

Grasp Template
Defines where and how to grasp the object: g = (H, E, N, S, Pp, Pf)
– H ∈ {1, 0}: left hand = 1, right hand = 0
– E ∈ {cylindrical, prismatic, spherical}: type of grasp
– N: vector of finger joint values where the fingers make contact with the object
– S ∈ R^2: 2D position of the robot pelvis relative to the template
– pose Pp ∈ R^3 × SO(3): position and quaternion orientation of the hand for the pre-grasp
– pose Pf ∈ R^3 × SO(3): position and quaternion orientation of the hand for the final grasp
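The tuple g = (H, E, N, S, Pp, Pf) maps naturally onto a record type. The field names below are hypothetical (the paper only gives the tuple), and poses are stored as plain 7-tuples (x, y, z plus an x, y, z, w quaternion) for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical field names; the paper only defines g = (H, E, N, S, Pp, Pf).
@dataclass
class GraspTemplate:
    hand: int                    # H: 1 = left hand, 0 = right hand
    grasp_type: str              # E: "cylindrical", "prismatic", or "spherical"
    finger_joints: List[float]   # N: finger joint values at object contact
    stand_pose: Tuple[float, float]  # S in R^2: pelvis position relative to template
    pre_grasp: Tuple[float, ...]     # Pp: x, y, z + quaternion (x, y, z, w)
    final_grasp: Tuple[float, ...]   # Pf: x, y, z + quaternion (x, y, z, w)
```

The operator iterates over a list of such records to preview candidate grasps before committing.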

Object Template Definition
x = (I, T, M, C, G, U)
– I ∈ N: ID number of the object of interest
– T ∈ N: type of template (tools, debris, hose)
– M ∈ R: estimated mass
– C ∈ R^3: estimated center of mass
– G: list of potential grasp templates g
– U = (Tx, Ty, Tz, Rx, Ry, Rz) ∈ {0, 1}^6: six-dimensional vector (3D translation and rotation) defining whether an action is possible along each dimension
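The object template x = (I, T, M, C, G, U) can likewise be sketched as a record, with a small helper that reads off which motions the usability vector U permits. Field names are hypothetical; only the tuple structure comes from the paper:

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical field names for x = (I, T, M, C, G, U).
@dataclass
class ObjectTemplate:
    template_id: int                   # I: ID number
    template_type: str                 # T: e.g. "tool", "debris", "hose"
    mass: float                        # M: estimated mass
    com: Tuple[float, float, float]    # C: estimated center of mass
    grasps: list                       # G: list of grasp templates g
    usability: Tuple[int, ...]         # U: (Tx, Ty, Tz, Rx, Ry, Rz), each 0 or 1

def allowed_motions(t: ObjectTemplate) -> List[str]:
    """Names of the translation/rotation axes the template permits."""
    axes = ("Tx", "Ty", "Tz", "Rx", "Ry", "Rz")
    return [a for a, u in zip(axes, t.usability) if u]
```

For example, a valve that only affords rotation about one axis would carry U = (0, 0, 0, 0, 0, 1), so only "Rz" is reported as allowed.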

Object Template Manipulation: World Model
– LIDAR and cameras gather sensor data.
– An IMU on the pelvis provides a pose estimate.
– Different coordinate frames are used to reconstruct point clouds.
– All joint states are visualized, with self-filtering of sensor data and collision avoidance.

Planning
– Sensor data is used to generate efficient motion plans.
– The robot builds a 3D OctoMap representation to avoid collisions.
– 2D grid-map slices are used for locomotion planning, to compute a collision-free footstep plan.

Providing Situational Awareness to the Operator
– Sensor data (e.g. 3D point clouds, 2D images, and video) is down-sampled and cropped as needed.
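Down-sampling a point cloud before transmitting it to the operator can be as simple as voxel-grid averaging. This pure-Python sketch is illustrative only (not the team's implementation): it bins points into cubic cells and keeps one averaged point per cell.

```python
from collections import defaultdict
from typing import Iterable, List, Tuple

Point = Tuple[float, float, float]

def voxel_downsample(points: Iterable[Point], voxel: float = 0.05) -> List[Point]:
    """Down-sample a point cloud: one averaged point per voxel-sized cell."""
    cells = defaultdict(list)
    for x, y, z in points:
        # Integer cell index along each axis (floor division handles negatives).
        key = (int(x // voxel), int(y // voxel), int(z // voxel))
        cells[key].append((x, y, z))
    # Average all points that fall into the same cell.
    return [tuple(sum(coord) / len(pts) for coord in zip(*pts))
            for pts in cells.values()]
```

A 5 cm voxel keeps the scene's coarse geometry while cutting bandwidth roughly in proportion to the local point density.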

Pipeline with Templates: Sense → Plan → Walk → Grasp → Use

Cartesian and Circular Path Planning
– The usability vector U constrains a Cartesian path between the initial and final poses.
– Waypoints are created using linear interpolation.
– Spherical linear interpolation (slerp) is used for orientation, so the end effector's goal orientation can differ from its start orientation.
– Circular motion: concatenate multiple short, linearly interpolated Cartesian paths (which can be designed to maintain the end effector's orientation).
Video: youtube.com/watch?v=wKFJO-Zkjck
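The two interpolation steps above can be illustrated with a small sketch. Names and data layout are illustrative (plain tuples for positions, x, y, z, w quaternions), not the team's code:

```python
import math
from typing import List, Tuple

def lerp(p0, p1, t: float):
    """Linear interpolation between two positions."""
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

def slerp(q0, q1, t: float):
    """Spherical linear interpolation between unit quaternions (x, y, z, w)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                    # flip to take the shorter arc
        q1, dot = tuple(-b for b in q1), -dot
    if dot > 0.9995:                 # nearly parallel: lerp and renormalize
        q = lerp(q0, q1, t)
        n = math.sqrt(sum(c * c for c in q))
        return tuple(c / n for c in q)
    theta = math.acos(dot)
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

def cartesian_path(p0, q0, p1, q1, n: int = 10) -> List[Tuple[tuple, tuple]]:
    """Waypoints from (p0, q0) to (p1, q1): positions lerped, orientations slerped."""
    return [(lerp(p0, p1, t), slerp(q0, q1, t))
            for t in (i / n for i in range(n + 1))]
```

A circular motion is then a concatenation of several such short Cartesian segments, each spanning a small arc of the circle.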

DARPA Robotics Challenge, Team ViGIR: Hose Task
Video: vLeo

DARPA Challenge: Hose Task

Hose Task continued

Valve Task

Valve Task continued

Conclusions and Future Work
Conclusions:
– Semantic commands can be sent efficiently, so missions can be planned on the fly.
– The approach is useful for grasping and manipulating objects.
– Limitation: small objects require fine manipulation, and time must be invested to align the template.
– Fastest team to win 2 points in the Hose Task, and the faster of the only 2 teams that attempted to turn the nozzle (ahead of the 2nd-place team in the Hose Task).
Future Work:
– automatic template-fitting algorithms
– automatic grasp planners that create new grasps on the fly
– extend the constrained motion paths to also provide the forces necessary for those motions