Better Vision through Experimental Manipulation

Giorgio Metta, Paul Fitzpatrick • MIT AI Lab, Humanoid Robotics Group • {pasa,paulfitz}@ai.mit.edu

Acknowledgments: This work is funded by DARPA as part of the "Natural Tasking of Robots Based on Human Interaction Cues" project under contract number DABT 63-00-C-10102, and by the Nippon Telegraph and Telephone Corporation as part of the NTT/MIT Collaboration Agreement.

Our Goal
To investigate the development of the association between visual information and motor commands in the learning, representation, and understanding of manipulative gestures.

A practical problem
For manipulation, we need to know which parts of the environment are physically coherent ensembles. This is a difficult judgement to make from purely visual information. In our test scene, a cube resting on a table, segmentation is hard because:
- the edges of the table and cube overlap,
- the colors of the cube and table are poorly separated,
- the cube has a misleading surface pattern,
- and maybe some cruel grad student glued the cube to the table.

Our solution
Use poking and prodding to solve the figure/ground problem experimentally. In the training phase, the robot fixates an area of interest, then sweeps its arm through that area to discover object boundaries (active segmentation). Poking gestures include a "side tap" and a "back slap". Impact triggers motion segmentation, in the following steps:

Locate arm from motion
Use the arm's motion signature to detect it and filter out distractors. Learn to predict the arm's location, and relate that location to proprioceptive feedback.

Detect contact events
At the moment of impact, there is a characteristic, discontinuous spread of perceived motion.

Segment impacted objects
Differentiate the motion of the arm from that of the object to reveal the object's boundary.

[Figure: six panels from a poking sequence, showing the optical flow, the maximum of the motion spread, and the segmented regions.]

Typical results
63 consecutive proddings of the cube, illustrating the frequency and types of error encountered.

Exploring an affordance: objects that roll
Experimentation by the robot reveals that certain objects have a preferred direction of motion relative to the principal axis of their shape: a bottle rolls at right angles to its principal axis, while a toy car rolls along it. The objects are clustered online by "pointiness": bottle 0.13, car 0.07, cube 0.03, ball 0.02.

[Plot: estimated probability of occurrence vs. difference between the angle of motion and the principal axis of the object, in degrees.]
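The steps above can be made concrete with short sketches. For "Locate arm from motion", the arm is the thing whose visible motion tracks the robot's own motor activity. A minimal sketch of that idea, assuming per-region motion-energy traces have already been extracted from optical flow; the region names, signal representation, and zero-lag correlation score are illustrative choices, not the authors' implementation:

```python
import numpy as np

def locate_arm(motor_activity, region_motion):
    """Return the name of the region whose motion trace best matches the
    robot's own motor activity -- a stand-in for the arm's motion signature.

    motor_activity: 1-D array of motor command magnitudes over time
    region_motion:  dict mapping region name -> 1-D motion-energy trace
                    of the same length
    """
    def ncc(a, b):
        # zero-lag normalized cross-correlation
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        return float(np.mean(a * b))

    return max(region_motion, key=lambda name: ncc(motor_activity, region_motion[name]))
```

Regions moving independently of the motor commands score low and are filtered out as distractors; the winning region can then be related to proprioceptive feedback to learn a predictor of arm location.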
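For "Detect contact events", the cue is a discontinuous spread of perceived motion at impact. One way to approximate it is to watch the area of the moving region jump between frames. A sketch assuming grayscale frames and OpenCV's dense Farneback flow; both thresholds are illustrative placeholders, not values from the poster:

```python
import cv2
import numpy as np

MOTION_THRESH = 0.5  # flow magnitude (px/frame) counted as "moving" -- illustrative
SPREAD_RATIO = 2.0   # growth factor of the moving area taken as impact -- illustrative

def contact_events(frames):
    """Yield indices of frames where the moving area expands abruptly,
    i.e. the arm's motion has spread into a previously static object."""
    prev_area = None
    for i in range(1, len(frames)):
        flow = cv2.calcOpticalFlowFarneback(
            frames[i - 1], frames[i], None, 0.5, 3, 15, 3, 5, 1.2, 0)
        area = int(np.count_nonzero(np.linalg.norm(flow, axis=2) > MOTION_THRESH))
        if prev_area and area > SPREAD_RATIO * prev_area:
            yield i
        prev_area = area
```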
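For "Segment impacted objects", any motion not explained by the already-localized arm is attributed to the poked object. A sketch under the same assumptions, taking the arm mask as given from the previous step:

```python
import cv2
import numpy as np

def segment_object(flow_mag, arm_mask, thresh=0.5):
    """Credit moving pixels outside the arm mask to the poked object and
    keep the largest connected component as the boundary estimate.

    flow_mag: 2-D array of optical-flow magnitudes at the moment of impact
    arm_mask: boolean mask of pixels already attributed to the arm
    """
    candidate = (flow_mag > thresh) & ~arm_mask
    n_labels, labels = cv2.connectedComponents(candidate.astype(np.uint8))
    if n_labels <= 1:                        # nothing moved besides the arm
        return np.zeros_like(candidate)
    sizes = np.bincount(labels.ravel())[1:]  # component sizes, skipping background
    return labels == (1 + int(np.argmax(sizes)))
```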
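Finally, the affordance experiment measures the angle between the direction an object rolls and the principal axis of its silhouette (the quantity on the histogram's x-axis). A sketch of that measurement from two segmentation masks using image moments; estimating motion from centroid displacement is a simplification, and the "pointiness" measure used for online clustering is not reproduced here:

```python
import cv2
import numpy as np

def principal_axis(mask):
    """Orientation (radians) of the silhouette's major axis, computed from
    central second-order image moments."""
    m = cv2.moments(mask.astype(np.uint8), binaryImage=True)
    return 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])

def rolling_offset_deg(mask_before, mask_after):
    """Angle in degrees between the object's displacement and its principal
    axis: near 0 for a car rolling along its axis, near 90 for a bottle."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return xs.mean(), ys.mean()
    (x0, y0), (x1, y1) = centroid(mask_before), centroid(mask_after)
    motion = np.arctan2(y1 - y0, x1 - x0)
    diff = abs(motion - principal_axis(mask_before)) % np.pi
    return float(np.degrees(min(diff, np.pi - diff)))
```

Accumulating this offset over repeated pokes yields a per-object histogram like the one on the poster; low-pointiness objects such as the ball and cube show no preferred rolling direction.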