Imitation and Social Intelligence for Synthetic Characters
Daphna Buchsbaum, MIT Media Lab and Icosystem Corporation
Bruce Blumberg, MIT Media Lab


Socially Intelligent Characters and Robots: able to learn by observing and interacting with humans and with each other; able to interpret others' actions, intentions, and motivations (characters with a Theory of Mind); a prerequisite for cooperative behavior.

Max and Morris

Max watches Morris using synthetic vision. He can recognize and imitate Morris's movements by comparing them to his own (using his own movement repertoire as the model/example set), and he uses movement recognition to bootstrap identifying simple motivations and goals and learning about new objects in the environment.

Infant Imitation: These interactions may help infants learn relationships between self and other, through 'like me' experiences (Simulation Theory).

"To know a man is to walk a mile in his shoes": understanding others using our own perceptual, behavioral, and motor mechanisms. We want to create a Simulation Theory-based social learning system for synthetic characters.

Motor Representation: The Posegraph. Nodes are poses; edges are allowable transitions. A motor program generates a path through a graph of annotated poses. Paths can be compared and classified. Related Work: Downie 2001, Master's Thesis; Arikan and Forsyth, SIGGRAPH 2002; Lee et al., SIGGRAPH 2002.
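The posegraph described on this slide can be sketched as a plain adjacency structure. A minimal illustration in Python (all pose names are hypothetical, not from the original system):

```python
# A minimal posegraph sketch: nodes are poses, edges are allowable
# transitions, and a motor program is a path through the graph.
posegraph = {
    "stand": ["crouch", "step_left"],
    "crouch": ["stand", "sit"],
    "sit": ["crouch"],
    "step_left": ["stand"],
}

def is_valid_program(path, graph):
    """A motor program is valid if every consecutive pose pair is an edge."""
    return all(b in graph[a] for a, b in zip(path, path[1:]))

def same_movement(path_a, path_b):
    """At this granularity, two movements match if they trace the same path."""
    return path_a == path_b
```

Because paths are just pose sequences, comparison and classification reduce to sequence operations, which is what makes recognizing an observed movement tractable later on.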

Motor Representation: The Posegraph. Multi-resolution graphs; nodes are movements; blending variants of the 'same' motion.

Synthetic Vision: a graphical camera captures Max's viewpoint and enforces sensory honesty (occlusion).

Synthetic Vision: key body parts are color-coded. Max locates them and remembers their position relative to Morris's root node. People watching a movement attend to end-effector locations.
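The "position relative to the root node" step amounts to a change of reference frame. A hedged sketch (generic vector math, not the authors' code):

```python
def relative_positions(body_parts, root):
    """Express each tracked body part (e.g. the end effectors that human
    observers attend to) relative to the demonstrator's root node, so the
    representation is invariant to where Morris stands in the world."""
    return {name: tuple(p - r for p, r in zip(pos, root))
            for name, pos in body_parts.items()}
```

Subtracting the root position makes the same movement look the same regardless of where in the scene it is performed.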

Parsing Motion: many different movements start and end in the same transitionary poses (Gleicher et al., 2003). These poses can be used as segment markers. Related Work: Bindiganavale and Badler, CAPTECH 1998; Fod, Mataric and Jenkins, Autonomous Robots 2002; Lieberman, Master's Thesis 2004.
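Segmenting at shared transitionary poses can be sketched as follows (the pose names and marker set are hypothetical):

```python
def segment_stream(pose_stream, marker_poses):
    """Split a continuous pose stream into movement units, cutting at the
    shared transitionary poses that many movements start and end in."""
    segments, current = [], []
    for pose in pose_stream:
        current.append(pose)
        if pose in marker_poses and len(current) > 1:
            segments.append(current)
            current = [pose]  # a marker pose also begins the next unit
    if len(current) > 1:
        segments.append(current)
    return segments
```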

Movement Recognition

Identify the best-matching path through the posegraph, then check whether this path closely matches an already existing movement.
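The slides do not specify the matching metric, so as an illustrative stand-in, a sequence edit distance over pose labels can rank known movements against an observed path:

```python
def edit_distance(a, b):
    """Levenshtein distance between two pose sequences."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
    return dp[len(a)][len(b)]

def classify_movement(observed, known, threshold=2):
    """Return the name of the closest known movement, or None if nothing
    in the repertoire is within the match threshold."""
    best = min(known, key=lambda m: edit_distance(observed, m["path"]))
    if edit_distance(observed, best["path"]) <= threshold:
        return best["name"]
    return None
```

Returning None when no known movement is close enough corresponds to the "novel movement" case the deck revisits under Future Work.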

Differing Movement Graphs

From Perception to Production: Max sees Morris move through a series of poses, identifies which movement(s) in his own graph are closest to what he perceived, and plays out his best-matching known movement.

Identifying Actions, Motivations and Goals

Action Identification

Top-level motivation systems

Representation of Action: the Action Tuple. Trigger: context in which the action can be performed. Object: optional object to perform the action on. Action: anything from setting an internal variable to making a motor request. Do-until: context in which the action is completed.
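The action-tuple representation might be rendered as a small data structure like the following (a sketch with invented names and trigger logic, not the Synthetic Characters codebase):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ActionTuple:
    """Trigger: context to start; Object: optional target; Action: payload
    (e.g. a motor request); Do-until: context in which the action is done."""
    name: str
    trigger: Callable[[dict], bool]
    do_until: Callable[[dict], bool]
    obj: Optional[str] = None
    action: Callable[[dict], None] = lambda world: None

# A toy "eat" tuple over a dict-shaped world model (field names assumed).
eat = ActionTuple(
    name="eat",
    trigger=lambda w: w.get("hunger", 0) > 0.5 and "food" in w,
    do_until=lambda w: w.get("hunger", 0) < 0.1,
    obj="food",
)
```

Keeping triggers and do-untils as predicates over a world model is what lets the same structure later be evaluated against a *simulated* model of the demonstrator, as the action-identification slides describe.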

Action Identification: the "should-I" trigger and the "can-I" trigger.

Action Identification Find bottom-level actions that use matched movements

Action Identification: find all paths through the action hierarchy to the matching action.

Action Identification Check “can-I” triggers, see which actions are possible.

Simulation Theory: use the "should-I" triggers in ancestors of matching actions to identify simple motivations (e.g. hunger); use "success" do-untils to identify simple intentions (e.g. get the object, eat) and to start identifying failed and successful actions.
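One toy rendering of this simulation-theory inference (the hierarchy format and field names are assumptions): walk from the matched action up through its ancestors, collecting motivations whose "should-I" trigger fires in the observer's own model of the demonstrator's situation.

```python
def infer_motivations(matched_action, hierarchy, simulated_world):
    """Collect motivations whose 'should-I' trigger holds anywhere on the
    path from the matched action up to the root of the action hierarchy."""
    motivations, node = [], matched_action
    while node is not None:
        entry = hierarchy[node]
        trigger = entry.get("should_i")
        if trigger is not None and trigger(simulated_world):
            motivations.append(entry.get("motivation", node))
        node = entry.get("parent")
    return motivations
```

The observer is reusing its own behavior machinery as the model of the other, which is exactly the simulation-theory stance the deck argues for.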

Learning About Objects

Contributions: What Max Can Do. Parse a continuous stream of motion into individual movement units; classify observed movements as one of his own; identify observed actions using his own action system; identify simple motivations and goals for an action; learn uses of objects through observation.

Future Work: What Max Can't Currently Do. Solve the correspondence problem; imitate characters with non-identical morphology; act on knowledge of a partner's goals (cooperative activity); handle novel movements (currently ignored).

Harder Problems: How do you use your knowledge? The limits of Simulation Theory; intentions vs. consequences (the problem of the robot that eats for you); what level of granularity to attend to (wanting the object vs. wanting to eat).

Acknowledgements: Members of the Synthetic Characters and Robotic Life Groups at the MIT Media Lab. Advisor: Bruce Blumberg, MIT Media Lab. Thesis Readers: Cynthia Breazeal, MIT Media Lab; Andrew Meltzoff, University of Washington. Special Thanks To: Jesse Gray, Marc Downie.