Perception and Control in Humanoid Robotics using Vision

Geoffrey Taylor
Supervisors: A/Prof Lindsay Kleeman, A/Prof R Andrew Russell
Electrical and Computer Systems Engineering Postgraduate Student Research Forum 2001

Imagine you had a domestic humanoid robot servant, then consider what you would like it to do … It quickly becomes clear that a practical domestic robot must possess a basic ability to find and grasp objects in a dynamic, cluttered environment (i.e. your house!). To address this issue, we have developed a self-calibrating, position-based visual servoing framework. Metalman, the Monash upper-torso humanoid robot, provides a platform for this and other exciting humanoid robot experiments. For more information, check the IRRC web page.

It's a visual thing …

Visual servoing is a feedback control technique that uses visual measurements to robustly regulate the motion of a robot. Metalman uses stereo cameras to estimate the 3D pose (position and orientation) of its hands by observing bright LEDs attached in a known pattern and feeding the measurements into a Kalman tracking filter. Other objects are localized in the same way via attached coloured markers. Depending on the desired action (e.g. grasping an object), Metalman uses this pose information to generate actuating signals that drive the arm to the required pose. Because Metalman continuously estimates the pose of its hands, the system is completely self-calibrating.

[Figure: the actual stereo view seen by Metalman while tracking its hand. Labels: LED markers on the hand facilitate pose tracking; Biclops active head; 3D hand pose measurement gives the relative position and orientation between hand and head; the final hand pose depends on the desired action; Metalman uses pose information to drive the hand in the desired direction.]

Using position-based visual servoing, Metalman has the ability to perform simple manipulation tasks; the sequence below shows Metalman autonomously locating and stacking three randomly placed blocks. Future work will include servoing two arms cooperatively to perform even more complex tasks!

[Figure: block-stacking sequence; the progress time is indicated at the top right of each frame (0 s to 160 s).]

Where has all the data gone?

In a complex system such as Metalman, the interaction of various components can generate unwanted dynamics such as dead-time delays. For instance, the graph below plots the position of the head during a sinusoidal motion: the red line indicates joint encoder data, and the blue line shows data from the cameras. The apparent 30 ms delay between these devices can degrade Metalman's dynamic performance. In this work, we develop simple matching and prediction techniques that allow Metalman to autonomously estimate and reduce these effects.

[Figure: head position during a sinusoidal motion; joint encoder data in red, camera data in blue.]

Even robots get lonely!

Metalman must interact with humans to be truly useful. The experiment below demonstrates simple interaction using motion cues: the user taps on a random block, and Metalman places a finger above the selected object.
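The poster does not spell out how the stereo LED observations become 3D measurements. One standard approach, shown below as a minimal sketch, is linear (DLT) triangulation of each LED from the two calibrated camera views; the function name, projection matrices and pixel coordinates are illustrative assumptions, not the actual Metalman implementation.

```python
import numpy as np

def triangulate(P_left, P_right, uv_left, uv_right):
    """Linear (DLT) triangulation of one point from a calibrated
    stereo pair. P_left/P_right are 3x4 projection matrices;
    uv_left/uv_right are pixel coordinates of the same LED."""
    A = np.vstack([
        uv_left[0]  * P_left[2]  - P_left[0],
        uv_left[1]  * P_left[2]  - P_left[1],
        uv_right[0] * P_right[2] - P_right[0],
        uv_right[1] * P_right[2] - P_right[1],
    ])
    _, _, Vt = np.linalg.svd(A)     # least-squares null vector of A
    X = Vt[-1]
    return X[:3] / X[3]             # homogeneous -> 3D point
```

Triangulating each LED of the known pattern yields a set of 3D points from which the full hand pose can then be fitted.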
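"It's a visual thing …" says the LED measurements are fed into a Kalman tracking filter but gives no filter details. A minimal sketch of one plausible choice, a constant-velocity Kalman filter tracking a single triangulated LED position, follows; the frame period and noise covariances are assumptions chosen only for illustration.

```python
import numpy as np

# State x = [position, velocity] in R^6; we observe position only.
dt = 1.0 / 25.0                                 # assumed camera frame period
F = np.eye(6)
F[:3, 3:] = dt * np.eye(3)                      # p' = p + v*dt
H = np.hstack([np.eye(3), np.zeros((3, 3))])    # measurement model
Q = 1e-4 * np.eye(6)                            # process noise (assumed)
R = 1e-3 * np.eye(3)                            # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle given a measured LED position z (3-vector)."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                               # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ y
    P = (np.eye(6) - K @ H) @ P
    return x, P
```

Each camera frame, the latest triangulated LED position would be passed to kf_step; the velocity part of the state is also what a prediction step across a known sensing delay could exploit.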
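Position-based visual servoing then converts the gap between the measured and desired hand pose into actuating signals. The poster does not state the control law; a common and plausible reading is a proportional law on a 6-vector pose error, sketched below with an assumed gain.

```python
import numpy as np

def pose_error(T_hand, T_goal):
    """6-vector error between 4x4 homogeneous transforms: translation
    error plus an axis-angle rotation error (angles near pi would need
    special handling, omitted here)."""
    e_t = T_goal[:3, 3] - T_hand[:3, 3]
    R_err = T_goal[:3, :3] @ T_hand[:3, :3].T
    angle = np.arccos(np.clip((np.trace(R_err) - 1) / 2, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        e_r = np.zeros(3)
    else:
        axis = np.array([R_err[2, 1] - R_err[1, 2],
                         R_err[0, 2] - R_err[2, 0],
                         R_err[1, 0] - R_err[0, 1]]) / (2 * np.sin(angle))
        e_r = angle * axis
    return np.hstack([e_t, e_r])

def servo_twist(T_hand, T_goal, gain=0.5):
    """Proportional servo law: commanded Cartesian velocity is
    proportional to the remaining pose error (gain is an assumption)."""
    return gain * pose_error(T_hand, T_goal)
```

Mapping the commanded twist through the arm's inverse kinematics would close the visual feedback loop.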
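Finally, for "Where has all the data gone?": the poster describes simple matching techniques for estimating the encoder-camera delay without giving details. The sketch below shows one such matching scheme, locating the peak of the cross-correlation between the two position signals; the sampling rate and synthetic sinusoidal data are assumptions, with the camera signal artificially delayed by 30 ms to mirror the plotted example.

```python
import numpy as np

def estimate_delay(encoder, camera, dt):
    """Estimate how far `camera` lags `encoder` (in seconds) by
    locating the peak of their cross-correlation."""
    e = encoder - np.mean(encoder)
    c = camera - np.mean(camera)
    corr = np.correlate(c, e, mode="full")   # lags -(N-1) .. +(N-1)
    lag = np.argmax(corr) - (len(e) - 1)     # samples camera trails encoder
    return lag * dt

# Synthetic example: 1 Hz sinusoid sampled at 1 kHz, camera delayed 30 ms.
dt = 0.001
t = np.arange(0.0, 2.0, dt)
encoder = np.sin(2 * np.pi * t)
camera = np.sin(2 * np.pi * (t - 0.030))
print(estimate_delay(encoder, camera, dt))   # prints approximately 0.030
```

With the synthetic data the recovered delay is close to 0.030 s; in practice the measured joint encoder and camera trajectories would be matched instead.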