Spatial Reasoning with Guinness
University of Missouri, Columbia, MO


THE STARTING ARCHITECTURE
[Architecture diagram] Components: mapserver, SRserver, continuous localization, speech, PDA (NRL), robot, palmhelper, palmserver, GUI (EUT), gesture input, trulla, vfh. Data flows: pose, sensor data and sensor info, corrections, the oldest short-term map, the short-term and long-term maps, robot pose, encoders, speech commands, user commands and responses, and robot commands.

THE CURRENT ARCHITECTURE
[Architecture diagram] Adds an imageserver, a robot_spatial module, the Cortex, sketch log files, and a spatialview display to the earlier components (mapserver, SRserver, continuous localization, speech, PDA (MU), robot, GUI (EUT), gesture input, trulla, vfh). Data flows: pose, sensor data and sensor info, corrections, the oldest short-term map, the short-term and long-term maps, robot pose, encoders, speech commands, sketches, user commands and responses, and robot commands.

THE PLANNED ARCHITECTURE
[Architecture diagram] Replaces trulla/vfh path following with spatial-behavior and obstacle-avoidance modules. The Cortex exchanges SR & map info with the SRserver, issues queries & labels, and passes sketch directives & feedback between the PDA (MU) and the robot commands. Other components (imageserver, mapserver, continuous localization, speech, GUI (EUT), gesture input) carry over, along with pose, sensor data, corrections, the short-term and long-term maps, encoders, speech commands, and user commands and responses.

User: How many objects do you see?
Robot: I am sensing four objects.
User: Object 2 is a table.
User: Describe the scene.
Robot: There are objects on my front right. Object number 4 is mostly in front of me. The table is behind me.
User: Go behind the table.
[SRserver display: the region behind the table]
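The cited papers generate such descriptions with the histogram of forces over range data; as a much simpler illustration, a crisp centroid-angle version of "describe the scene" might look like the sketch below. The function name, angle thresholds, and object representation are all hypothetical, not the system's actual method.

```python
import math

def qualitative_direction(robot_pose, obj_xy):
    """Map an object's position, relative to the robot's heading, to a
    linguistic direction such as 'in front', 'front right', or 'behind'.

    robot_pose: (x, y, heading_radians); obj_xy: (x, y) of the object centroid.
    """
    rx, ry, heading = robot_pose
    ox, oy = obj_xy
    # Bearing to the object in the robot's frame (0 = straight ahead).
    rel = math.atan2(oy - ry, ox - rx) - heading
    rel = math.atan2(math.sin(rel), math.cos(rel))  # wrap to [-pi, pi]
    deg = math.degrees(rel)
    if abs(deg) <= 22.5:
        return "in front"
    if abs(deg) >= 157.5:
        return "behind"
    side = "left" if deg > 0 else "right"
    if abs(deg) < 67.5:
        return "front " + side
    if abs(deg) > 112.5:
        return "rear " + side
    return side

# A robot at the origin facing along +x, with an object ahead and to the right:
print(qualitative_direction((0.0, 0.0, 0.0), (2.0, -1.5)))  # front right
```

A fuzzy version would return a degree of membership in each direction rather than a single crisp label, which is closer to what the histogram-of-forces approach provides.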

Computing the target point for "between object 1 and object 2":
- using the midpoint between closest points
- using the midpoint between centroids
- using the CFMD
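The first two variants above are straightforward to sketch; assuming objects are given as lists of boundary points (a hypothetical representation), they might look like this. The CFMD variant is not sketched here.

```python
def centroid(points):
    """Arithmetic mean of a list of (x, y) points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

def between_via_centroids(obj1, obj2):
    """'Between' target as the midpoint of the two object centroids."""
    return midpoint(centroid(obj1), centroid(obj2))

def between_via_closest_points(obj1, obj2):
    """'Between' target as the midpoint of the closest pair of boundary points."""
    p, q = min(((p, q) for p in obj1 for q in obj2),
               key=lambda pq: (pq[0][0] - pq[1][0]) ** 2
                            + (pq[0][1] - pq[1][1]) ** 2)
    return midpoint(p, q)

# Two unit squares represented by their corner points:
obj1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
obj2 = [(3, 0), (4, 0), (4, 1), (3, 1)]
print(between_via_centroids(obj1, obj2))       # (2.0, 0.5)
print(between_via_closest_points(obj1, obj2))  # (2.0, 0.0)
```

The two methods disagree whenever the objects' shapes are asymmetric about the line joining their centroids, which is exactly why the slide compares several definitions.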

Image Server

Understanding Sketched Route Maps
PATH DESCRIPTION GENERATED FROM THE SKETCHED ROUTE MAP
1. When the table is mostly on the right and the door is mostly to the rear (and close), then move forward.
2. When the chair is in front or mostly in front, then turn right.
3. When the table is mostly on the right and the chair is to the left rear, then move forward.
4. When the cabinet is mostly in front, then turn left.
5. When the ATM is in front or mostly in front, then move forward.
6. When the cabinet is mostly to the rear, the tree is mostly on the left, and the ATM is mostly in front, then stop.
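Rules of this form can be encoded as predicate/command pairs that are evaluated against the robot's perceived spatial state at each control step. The crisp first-match sketch below is a simplification (the cited work matches rules with fuzzy spatial relations), and the state keys and relation strings are hypothetical.

```python
# Each rule pairs a predicate on the perceived spatial state with a command.
# The state maps landmark names to qualitative relations (hypothetical encoding).
RULES = [
    (lambda s: s.get("table") == "mostly right" and s.get("door") == "mostly rear",
     "move forward"),
    (lambda s: s.get("chair") in ("in front", "mostly in front"), "turn right"),
    (lambda s: s.get("table") == "mostly right" and s.get("chair") == "left rear",
     "move forward"),
    (lambda s: s.get("cabinet") == "mostly in front", "turn left"),
    (lambda s: s.get("ATM") in ("in front", "mostly in front"), "move forward"),
    (lambda s: s.get("cabinet") == "mostly rear" and s.get("tree") == "mostly left"
     and s.get("ATM") == "mostly in front", "stop"),
]

def next_command(state):
    """Return the command of the first rule whose condition holds, else None."""
    for condition, command in RULES:
        if condition(state):
            return command
    return None

print(next_command({"chair": "mostly in front"}))  # turn right
```

A fuzzy matcher would instead score every rule by the degree to which all of its relations hold and fire the best-scoring one, which avoids the ordering sensitivity of crisp first-match evaluation.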

References
[1] M. Skubic, P. Matsakis, G. Chronis and J. Keller, "Generating Multi-Level Linguistic Spatial Descriptions from Range Sensor Readings Using the Histogram of Forces," Autonomous Robots, vol. 14, no. 1, Jan. 2003.
[2] M. Skubic, D. Perzanowski, S. Blisard, A. Schultz, W. Adams, M. Bugajska and D. Brock, "Spatial Language for Human-Robot Dialogs," IEEE Transactions on SMC, Part C, to appear in the special issue on Human-Robot Interaction.
[3] M. Skubic, S. Blisard, C. Bailey, J.A. Adams and P. Matsakis, "Qualitative Analysis of Sketched Route Maps: Translating a Sketch into Linguistic Descriptions," IEEE Transactions on SMC, Part B, to appear.
[4] G. Chronis and M. Skubic, "Sketch-Based Navigation for Mobile Robots," in Proc. IEEE 2003 Intl. Conf. on Fuzzy Systems, May 2003, St. Louis, MO.
[5] G. Scott, J.M. Keller, M. Skubic and R.H. Luke III, "Face Recognition for Homeland Security: A Computational Intelligence Approach," in Proc. IEEE 2003 Intl. Conf. on Fuzzy Systems, May 2003, St. Louis, MO.

Guinness and Gang
From left to right: George Chronis, Grant Scott, Dr. Marge Skubic, Matt Williams, Craig Bailey, Bob Luke, Charlie Huggard and Sam Blisard. Missing: Dr. Jim Keller.

Sketch-Based Navigation The sketched route map The robot traversing the sketched route

Sketch-Based Navigation The digitized sketched route map The robot traversing the sketched route


Acknowledgements
This work has been supported by ONR and the U.S. Naval Research Lab. Natural language understanding is accomplished using a system developed by NRL, called Nautilus [Wauchope, 2000]. We also want to acknowledge the help of Dr. Pascal Matsakis.

NRL’s Multimodal Robot Interface