Human-Robot Communication: Telling, Asking and Teaching
Peter Ford Dominey, CNRS
Telling Robots What to Do
Spoken language control of the different actions in the robot's behavioral repertoire.
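One way to picture this control loop is a dispatch from recognized utterances to behaviors in the repertoire. The sketch below is only an illustration under assumptions: the recognize_speech() stub, the behavior names, and the dispatch table are hypothetical stand-ins, not the actual Aibo interface.

```python
# Minimal sketch: routing recognized spoken commands to robot behaviors.
# Behavior names and recognize_speech() are illustrative assumptions.

BEHAVIORS = {
    "stand": lambda: print("Aibo stands up"),
    "sit":   lambda: print("Aibo sits down"),
    "walk":  lambda: print("Aibo walks forward"),
}

def recognize_speech() -> str:
    """Stand-in for the speech recognizer; returns a lowercase utterance."""
    return input("Say a command: ").strip().lower()

def tell_robot() -> None:
    utterance = recognize_speech()
    action = BEHAVIORS.get(utterance)
    if action is None:
        print(f"Unknown command: {utterance!r}")
    else:
        action()

if __name__ == "__main__":
    tell_robot()
```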
Learning Perceptual Categories: Introduction
Identification of new user
Tutorial
Teaching new relations
Recognizing relations
Handling unknown relations
Learning 1
Spatial attention directed to objects that have been moved
Ensemble of primitive relations (horizontal and vertical) extracted
Global form characterized
User invited to name the demonstrated relation
Learning 2
Spatial attention directed to objects that have been moved
Ensemble of primitive relations (horizontal and vertical) extracted
Global form characterized
User invited to name the demonstrated relation
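One way to read these learning steps is as extraction of the horizontal and vertical offsets between the moved object and a reference object, characterization of the resulting global form, and a prompt to the user for a name. The sketch below assumes that reading; the sign-based descriptor, the object coordinates, and the learned_relations store are hypothetical.

```python
# Sketch of relation learning, assuming each object is an (x, y) position.
# The descriptor and the learned_relations store are illustrative assumptions.

learned_relations: dict[tuple[int, int], str] = {}

def primitive_relations(moved: tuple[float, float],
                        reference: tuple[float, float]) -> tuple[int, int]:
    """Characterize the global form as the signs of the horizontal
    and vertical offsets between the moved and reference objects."""
    dx = moved[0] - reference[0]
    dy = moved[1] - reference[1]

    def sign(v: float) -> int:
        return (v > 0) - (v < 0)

    return sign(dx), sign(dy)

def learn_relation(moved, reference) -> str:
    """Extract the relation from a demonstration and invite the user to name it."""
    form = primitive_relations(moved, reference)
    name = input(f"Please name the demonstrated relation {form}: ").strip()
    learned_relations[form] = name
    return name

# Example: the user moves an object directly above the reference object
# and names the resulting relation (e.g. "above").
learn_relation(moved=(2.0, 5.0), reference=(2.0, 1.0))
```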
Handling Unknown Relations
If the user asks the system to identify an unknown relation, the system invites the user to name it for future reference.
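Building on the hypothetical learned_relations store above, this fallback could look roughly as follows: when identification is requested for a form that has no stored name, the system asks the user to supply one.

```python
def identify_or_learn(form: tuple[int, int]) -> str:
    """Return the stored name for a relation form, or invite the user
    to name an unknown relation so it can be recognized later."""
    if form in learned_relations:
        return learned_relations[form]
    name = input("I do not know this relation; what should I call it? ").strip()
    learned_relations[form] = name
    return name
```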
Recognition 1
Known relations can then be applied to new configurations that were not used in training.
Recognition 2
Known relations can then be applied to new configurations that were not used in training.
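Recognition can then be pictured as recomputing the same descriptor on a configuration never seen during training and looking it up, as in this continuation of the sketch above (coordinates are illustrative).

```python
def recognize_relation(moved, reference) -> str:
    """Apply learned relation names to a new, untrained configuration."""
    form = primitive_relations(moved, reference)
    return learned_relations.get(form, "unknown relation")

# A new configuration: different coordinates, but the same global form
# (moved object above the reference) maps to the previously learned name.
print(recognize_relation(moved=(7.0, 9.0), reference=(7.0, 3.0)))
```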
Learning Action Commands: Introduction and New Commands
Explain the system to a new user
Invite the user to link a behavior with a command or button press
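A command association could be held as a mapping from inputs in either modality (a spoken word or a button identifier) to behaviors. The sketch below assumes that representation; the behavior functions and button names are hypothetical.

```python
# Sketch: associating inputs from two modalities (speech, button presses)
# with behaviors. Behavior functions and button names are assumptions.
from typing import Callable

def stand() -> None:
    print("Aibo stands up")

def sit() -> None:
    print("Aibo sits down")

# (modality, input) -> behavior
command_map: dict[tuple[str, str], Callable[[], None]] = {}

def learn_command(modality: str, user_input: str,
                  behavior: Callable[[], None]) -> None:
    """Link a behavior with a spoken command or a button press."""
    command_map[(modality, user_input)] = behavior

learn_command("speech", "stand", stand)
learn_command("button", "back_button", sit)
```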
Modifying Existing Commands
Choose a learned action in one modality ("Stand")
Associate it with a command from a different modality (head button press)
Using Learned Commands
Choose a learned action in either modality
Redefining Learned Actions
Choose a learned action in either modality
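Under the same assumed representation, using a learned command reduces to a lookup in either modality, and modifying or redefining it (for example, attaching the already-learned "Stand" action to the head button press) simply overwrites the entry, as in this continuation of the sketch.

```python
def execute_command(modality: str, user_input: str) -> None:
    """Use a learned command from either modality."""
    behavior = command_map.get((modality, user_input))
    if behavior is None:
        print(f"No behavior learned for {modality} input {user_input!r}")
    else:
        behavior()

# Re-map: associate the learned "Stand" action with the head button press.
learn_command("button", "head_button", stand)

execute_command("speech", "stand")        # spoken command
execute_command("button", "head_button")  # same action, other modality
```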
Interrogating the Aibo: Introduction
Tutorial
Interrogating the Aibo: Physical State
Battery charge
Battery temperature
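Interrogation of the physical state can be pictured as spoken questions answered from the robot's internal sensor readings. The aibo_state fields, values, and phrasing below are illustrative assumptions, not the actual Aibo API.

```python
# Sketch: answering questions about the robot's physical state.
# The aibo_state readings and response phrasing are assumptions.

aibo_state = {
    "battery_charge_percent": 72,
    "battery_temperature_c": 31.5,
}

def answer_state_query(question: str) -> str:
    q = question.lower()
    if "charge" in q:
        return f"My battery is at {aibo_state['battery_charge_percent']}%."
    if "temperature" in q:
        return (f"My battery temperature is "
                f"{aibo_state['battery_temperature_c']} degrees C.")
    return "I cannot answer that question about my physical state."

print(answer_state_query("What is your battery charge?"))
print(answer_state_query("What is your battery temperature?"))
```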
Interrogating the Aibo: External Sensors
Recognition of experienced user
Streamlined interface
Object vision
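Queries about the external sensors could be handled in the same way, with the streamlined interface skipping the tutorial once an experienced user is recognized. The user_is_experienced flag and the vision fields below are assumptions for illustration.

```python
# Sketch: external-sensor queries with a streamlined interface for
# a recognized experienced user. Sensor fields and the flag are assumptions.

external_sensors = {
    "object_visible": True,
    "object_color": "pink",
    "distance_cm": 40,
}

def greet(user_is_experienced: bool) -> None:
    if user_is_experienced:
        print("Welcome back. Ask me about my sensors.")  # skip the tutorial
    else:
        print("Hello. I can report what I see; try asking 'what do you see?'.")

def answer_vision_query() -> str:
    if not external_sensors["object_visible"]:
        return "I do not see any object."
    return (f"I see a {external_sensors['object_color']} object "
            f"about {external_sensors['distance_cm']} cm away.")

greet(user_is_experienced=True)
print(answer_vision_query())
```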