Laboratory for Perceptual Robotics, University of Massachusetts Amherst, Department of Computer Science. Intent Recognition as a Basis for Imitation.


Intent Recognition as a Basis for Imitation Learning in Humanoid Robots. Andrew Fagg, Rod Grupen, Mike Rosenstein, and John Sweeney. UMass Amherst. NEMS 2005.

Programming a Humanoid is Hard. Complex mechanisms: many DOF, many sensor streams. Programming by demonstration: the demonstrator performs a task, and the robot extracts the salient knowledge needed to reproduce it across many instances. Notable examples: [Pook & Ballard 93], [Kuniyoshi et al. 94], [Voyles et al. 99], and [Ijspeert et al. 02]. Imitation: imitation learning augments stochastic exploration for acquiring control knowledge.

On Imitation. Assume that the demonstrator is performing a goal-directed behavior. The kinematic properties of the demonstration are not important to us; they can be refined using robot-specific objectives. We are interested in the work conveyed by the demonstration: how the objects are manipulated, in what sequence, and so on.

Why Intention? Infer the goal and recognize the scene, and the behavior can be reproduced successfully, given some domain-specific knowledge. Intention is compatible across morphologies. Recognize more from less: intentions are more abstract representations of actions.

Demonstration by Teleoperation. Direct access to joint velocities and tactile information, with no correspondence problem. Difficulties of teleoperation: minimal feedback and communication delays; fatigue; discrepancy between the human and robot observational frames.

A Robot Teleoperation Interface. NASA/JSC Telepresence System.

Video.

How to infer intent?

Mirror Neurons [Rizzolatti et al. 01]. What this suggests: action generation and perception are intimately related, so use the controller as a sensor!

Controller as Sensor. A set of controllers is defined by the objects in the scene. Compare what the demonstrator does to what each controller would do: "control projection". Use domain knowledge to determine when meaningful events occur: pay attention to tactile events!
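The "control projection" comparison can be sketched as follows: each candidate controller proposes a command, and we score how closely the demonstrator's observed command matches each proposal. All controller names and numeric values here are hypothetical illustrations, not the slides' actual data.

```python
import numpy as np

def control_projection(demo_cmd, controller_cmds):
    """Score how well each hypothesized controller explains the
    demonstrator's instantaneous command (smaller distance = better)."""
    return [float(np.linalg.norm(demo_cmd - c)) for c in controller_cmds]

# Hypothetical example: three candidate controllers for objects in
# the scene; the first one closely matches the demonstration.
demo = np.array([0.10, -0.05, 0.20])
cands = [np.array([0.09, -0.04, 0.21]),   # reach-to-can controller
         np.array([-0.30, 0.10, 0.00]),   # reach-to-beanbag controller
         np.array([0.00, 0.00, 0.00])]    # idle controller
scores = control_projection(demo, cands)
best = int(np.argmin(scores))  # index of the best-matching controller
```

In this sketch the scores would be accumulated over time and gated by tactile events, as the slide describes.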

Set of Primitive Controllers. Each controller represents one identified affordance in the scene. Domain: objects on a table in front of the robot: cans, beanbags, and targets. AFFORDANCE: a functional matching between object and actor, described by particular perceptual features. [Gibson 77]

The Robot's View. Recognized affordances: type of grasp (top, side); DOF constraints (don't care about rotation about Z); every object has a previous place.

From Scene to Controllers. Object models yield affordances, and affordances yield controllers.

Extracting a Sequence of Actions. The set of controllers represents hypotheses about intention. Observable variables: controller errors and the force magnitude at the fingertips. Controller i explains a sequence of observations if: each step in the sequence reduces its error; the error at the end is small; and the sequence finishes with a tactile event. Bayesian inference selects the most likely controller.
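The three conditions for a controller to explain an observation segment can be written as a minimal check. The threshold `eps` is a hypothetical parameter, not one given on the slides.

```python
def explains(errors, tactile_end, eps=0.05):
    """Check the three conditions for controller i to explain a
    segment of observations: its error decreases at each step, the
    final error is small, and the segment ends with a tactile event."""
    decreasing = all(b < a for a, b in zip(errors, errors[1:]))
    return decreasing and errors[-1] < eps and tactile_end
```

A full treatment would score these conditions probabilistically and select the explaining controller by Bayesian inference, as the slide notes.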

Controller Primitives. Define Cartesian controller i in terms of its reference, error, and joint command. [equations shown as images on the slide]
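The slide's equations were images and did not survive transcription. A standard Cartesian position controller consistent with the labels (reference, error, joint command) would be, as an assumption:

```latex
e_i(t) = x^{\mathrm{ref}}_i - x(t), \qquad
u_i(t) = J^{+}\!\big(\theta(t)\big)\,\kappa\,e_i(t)
```

where $x(t)$ is the current end-effector position, $x^{\mathrm{ref}}_i$ the reference associated with affordance $i$, $J^{+}$ the Jacobian pseudoinverse at joint configuration $\theta(t)$, and $\kappa$ a gain.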

Determining Likelihood. Error is given by the distance between joint commands; from this, compute the likelihood of each controller at time t. [equations shown as images on the slide]
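Since the slide's likelihood equation is not in the transcript, here is one conventional form as a hedged sketch: a Gaussian likelihood in the distance between the demonstrator's command and controller i's command, with a hypothetical noise scale `sigma`.

```python
import numpy as np

def likelihood(demo_cmd, ctrl_cmd, sigma=0.1):
    """Likelihood of the observed demonstrator joint command under
    controller i at time t, modeled (as an assumption) as a Gaussian
    in the command-space distance."""
    e = np.linalg.norm(np.asarray(demo_cmd) - np.asarray(ctrl_cmd))
    return float(np.exp(-(e ** 2) / (2.0 * sigma ** 2)))
```

These per-step likelihoods feed the Bayesian inference over controller hypotheses.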

An Extracted Sequence.

Playing Back the Sequence. The most likely controller at each tactile event is recorded. The extracted sequence refers to affordances in relation to specific objects, so the scene can be rearranged: we just find the correspondence between objects. Simple visual models are used.
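Re-targeting the extracted sequence after the scene is rearranged amounts to substituting each object with its corresponding one. The controller and object names below are hypothetical.

```python
def rebind(sequence, correspondence):
    """Re-target an extracted (controller-type, object) sequence to a
    rearranged scene by substituting each object with the object it
    corresponds to in the new scene."""
    return [(ctrl, correspondence[obj]) for ctrl, obj in sequence]

# Hypothetical extracted sequence and object correspondence.
seq = [("top-grasp", "can-1"), ("transport", "target-A")]
corr = {"can-1": "can-2", "target-A": "target-B"}
replayed = rebind(seq, corr)
```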

Further Work. A more elaborate model of activity: look at both controller error and the change in error. More elaborate representations of tasks: hierarchical structure, e.g., using a tool or building a structure. Identify affordances from interaction: find visual features that predict affordances; categorization. Relational models to describe object interaction: how objects can interact depends on their identities.


Related Work. Teleoperation activity recognition [Pook & Ballard 93]; block-stacking imitation [Kuniyoshi et al. 94]; Gesture-Based Programming [Voyles et al. 99]; movement imitation [Ijspeert et al. 02].

Mirror Neurons. An area of ventral premotor cortex in primates, discovered by Rizzolatti et al. [1996], whose neurons fire both when a monkey performs a grasp and when it observes another perform a grasp.