Presentation transcript:

Human Agent
 Motivated by the desire for natural human-robot interaction
 Encapsulates what the robot knows about the human:
   Identity
   Location
   Intentions

Human Agent Modules
 Detection module: detects human presence using multiple sensor modalities
   IR motion sensor array
   Speech recognition
   Face detection
 Monitoring module: keeps track of the detected human
   Localization and tracking algorithms
 Identification module: determines the identity of the human based on the current model
   Identifies the human by comparing against stored models
   Detects changes in the dynamic model
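The three modules map naturally onto a small software interface. The Python sketch below is only illustrative: the class names, method names, and sensor fields are assumptions for this transcript, not the original system's API.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class Observation:
    """One fused sensor reading consumed by the Detection module (illustrative)."""
    ir_motion: bool = False                                 # IR motion sensor array
    speech: Optional[str] = None                            # speech recognition output
    face_location: Optional[Tuple[int, int, int]] = None    # face detector result

class DetectionModule:
    """Detects human presence from any of the sensor modalities."""
    def human_present(self, obs: Observation) -> bool:
        return obs.ir_motion or obs.speech is not None or obs.face_location is not None

class MonitoringModule:
    """Keeps track of the detected human over time."""
    def __init__(self) -> None:
        self.last_face_location: Optional[Tuple[int, int, int]] = None
    def update(self, obs: Observation) -> None:
        if obs.face_location is not None:
            self.last_face_location = obs.face_location     # stand-in for a real tracker

class IdentificationModule:
    """Determines the human's identity by comparison against stored models."""
    def __init__(self, stored_models: Dict[str, object]) -> None:
        self.stored_models = stored_models                  # e.g. {"Stan": face_signature}
    def identify(self, signature: object) -> Optional[str]:
        for name, model in self.stored_models.items():
            if model == signature:                          # stand-in for real matching
                return name
        return None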

Human Agent Internal Model
 Model of the current human: description of the current human
 Human activity: description of what the user is doing
 User’s request: the nature of the interaction, i.e. the task the user requests of the robot
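One possible shape for this internal model is a pair of small data records, one describing the current human and one wrapping it together with the activity and the request. The field names below are illustrative assumptions, not the original system's data structures.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class HumanModel:
    """Description of the current human."""
    name: Optional[str] = None
    emotion: Optional[str] = None
    face_location: Optional[Tuple[int, int, int]] = None
    hand_locations: List[Tuple[int, int, int]] = field(default_factory=list)

@dataclass
class HumanAgentInternalModel:
    """Model of the current human plus activity and request."""
    current_human: HumanModel = field(default_factory=HumanModel)
    activity: Optional[str] = None    # what the user is doing
    request: Optional[str] = None     # the task the user requests of the robot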

Model of the Human
 Name: Stan (Identification module)
 Emotion: Sad (Monitoring module)
 Command: Play with me (Monitoring module)
 Face Location: (x, y, z) = (122, 34, 205) (Detection & Monitoring modules)
 Hand Locations: (x, y, z) = (85, -10, 175) and (175, 56, 186) (Detection & Monitoring modules)
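For illustration, the example on this slide can be written out as a plain Python record; the key names are hypothetical, and the comments note which module supplies each field.

stan_model = {
    "name": "Stan",                                       # Identification module
    "emotion": "Sad",                                     # Monitoring module
    "command": "Play with me",                            # Monitoring module
    "face_location": (122, 34, 205),                      # Detection & Monitoring modules
    "hand_locations": [(85, -10, 175), (175, 56, 186)],   # Detection & Monitoring modules
}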