
Spatial Reasoning for Semi-Autonomous Vehicles Using Image and Range Data Marjorie Skubic and James Keller Students: Sam Blisard, George Chronis, Grant Scott, Bob Luke, Craig Bailey, Matt Williams, Charlie Huggard Electrical & Computer Engineering Dept. University of Missouri-Columbia

Motivation: Spatial Language for Human-Robot Dialog
Cognitive models indicate that people use spatial relationships in navigation and other spatial reasoning (Previc, Schunn).
–"There is a desk in front of me and a doorway behind it."
–"Go around the desk and through the doorway."
More natural interaction with robotic vehicles.

Spatial language can be used to:
–Focus attention: "look to the left of the telephone"
–Issue commands: "pick up the book on top of the desk"
–Describe a high-level representation of a task: "go behind the counter, find my coffee mug on the table, and bring it back to me"
–Receive feedback from the robot describing the environment: "there is a book on top of the desk to the right of the coffee mug"

Our Spatial Modeling Tool
Capturing qualitative spatial information between 2 objects:
–The histogram of constant forces
–The histogram of gravitational forces
Features extracted from the histograms are used to generate linguistic spatial terminology (Matsakis et al. 1999, 2001).
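As a rough sketch of the underlying idea (not Matsakis et al.'s exact algorithm, which aggregates forces along longitudinal sections of the objects), the two histograms can be approximated by binning the directions of all point pairs between the objects, weighted by an inverse power of distance — power 0 for constant forces, power 2 for gravitational forces:

```python
import numpy as np

def force_histogram(A, B, n_dirs=180, exponent=0):
    """Approximate a histogram of forces between two point sets.

    exponent=0 ~ constant forces; exponent=2 ~ gravitational forces.
    A, B: (N, 2) arrays of (x, y) pixel coordinates of each object.
    (Illustrative sketch only; the real method integrates forces
    along parallel longitudinal sections.)
    """
    diff = A[:, None, :] - B[None, :, :]             # vectors from B to A
    angles = np.arctan2(diff[..., 1], diff[..., 0])  # directions in (-pi, pi]
    dist = np.hypot(diff[..., 0], diff[..., 1])
    weights = 1.0 / np.maximum(dist, 1e-9) ** exponent
    bins = np.linspace(-np.pi, np.pi, n_dirs + 1)
    hist, _ = np.histogram(angles, bins=bins, weights=weights)
    centres = 0.5 * (bins[:-1] + bins[1:])
    return hist, centres

# A small object A directly to the right of B:
A = np.array([[10.0, 0.0], [11.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
hist, centres = force_histogram(A, B)
print(centres[np.argmax(hist)])  # peak near 0 rad, i.e. A is to the RIGHT of B
```

The peak direction of the histogram indicates the dominant spatial relation; the spread around the peak indicates how "perfectly" the relation holds.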

Our Spatial Modeling Tool
[Figure: histogram of forces computed between two objects A and B along a direction v (Matsakis et al., 1999) — capturing qualitative spatial information between 2 objects.]

Linguistic Scene Description
1. Each histogram (constant forces and gravitational forces) gives its opinion about the relative position between the objects that are considered.
2. The two opinions are combined. Four numeric and two symbolic features result from this combination.
A system of 27 fuzzy rules and meta-rules allows meaningful linguistic descriptions to be produced.
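As a toy illustration of the final mapping step — the actual system uses the 27-rule fuzzy rule base over the combined histogram features; the 45-degree sectors and hedge thresholds below are assumptions, not the real rules — a main direction angle and a degree-of-truth value can be turned into a hedged linguistic term:

```python
def describe(angle_deg, strength):
    """Toy linguistic mapping (hypothetical thresholds, not the 27-rule system).

    angle_deg: main direction of the combined histogram (0 = right, CCW).
    strength: degree of truth in [0, 1] for the primary direction.
    """
    terms = ["RIGHT", "ABOVE-RIGHT", "ABOVE", "ABOVE-LEFT",
             "LEFT", "BELOW-LEFT", "BELOW", "BELOW-RIGHT"]
    idx = int(((angle_deg + 22.5) % 360) // 45)   # nearest 45-degree sector
    if strength > 0.9:
        hedge = "perfectly"
    elif strength > 0.6:
        hedge = "mostly"
    else:
        hedge = "loosely"
    return f"{hedge} {terms[idx]}"

print(describe(180.0, 0.95))  # -> "perfectly LEFT"
print(describe(90.0, 0.5))    # -> "loosely ABOVE"
```

Outputs of this kind correspond to the descriptions in the examples that follow ("perfectly to the LEFT", "loosely ABOVE-LEFT").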

The tower is perfectly to the LEFT of the stack buildings. Linguistic Scene Description Example

The group of storehouses is loosely ABOVE-LEFT of the stack buildings. Linguistic Scene Description Example

The storehouse is ABOVE the stack buildings, but shifted to the RIGHT. Linguistic Scene Description Example

HRI Scenario: Scene 1
Human: "How many objects do you see?"
Robot: "I am sensing 4 objects."
Human: "Where are they?"
Robot: "There are objects in front of me. I am surrounded from the rear. Object #4 is to the right of me."
Human: "Where is the nearest object in front of you?"
Robot: "Object #3 is in front of me but extends to the right relative to me. The object is very close."

HRI Scenario: Scene 1, cont.
Human: "Object #3 is a box."
Robot: "I now know object #3 is a box."
Human: "Go to the right of the box."

HRI Scenario: Scene 2
The robot has moved to the right of the box. The pillar has now come into view (object #4).

An alternate representation for the horizontal plane, stored in a robot-centric frame: SURROUNDED on the front; an object is BEHIND, FAR.
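A minimal sketch of such a robot-centric qualitative representation, assuming four 90-degree sectors and hand-picked near/far thresholds (the sector widths and distances are illustrative assumptions, not the system's actual parameters):

```python
def qualitative_scan(readings, near=1.0, far=3.0):
    """Summarise (bearing_deg, range_m) readings into robot-centric sectors.

    Bearings are robot-centric: 0 = front, 90 = left, 180 = rear, 270 = right.
    Returns a qualitative distance label per sector.
    """
    sectors = {"front": [], "left": [], "rear": [], "right": []}
    for bearing, rng in readings:
        b = bearing % 360
        if b < 45 or b >= 315:
            sectors["front"].append(rng)
        elif b < 135:
            sectors["left"].append(rng)
        elif b < 225:
            sectors["rear"].append(rng)
        else:
            sectors["right"].append(rng)
    summary = {}
    for name, ranges in sectors.items():
        if not ranges:
            summary[name] = "clear"
        else:
            d = min(ranges)  # closest return in this sector dominates
            summary[name] = "very close" if d < near else ("close" if d < far else "far")
    return summary

result = qualitative_scan([(0, 0.4), (20, 0.5), (180, 5.0)])
print(result)  # front: very close; rear: far; left and right: clear
```

Feeding a summary like this into the dialog manager yields robot responses such as "Object #3 is in front of me. The object is very close."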

Combining range and image data: combine the horizontal and vertical planes.
Object #3 is actually "Bob." Bob is recognized and labeled by the MSNN.

Object Recognition
[Figure: MSNN-based recognition pipeline (Won & Gader, 1997) — input scene, MSNN, confidence field, post-processing, visual verification, output images; linguistic response: "I found Wiley at 4 degrees, Bob at 1 degree."]

Object recognition using the MSNN

Sketch Interface
Path description generated from the sketched route map:
1. When table is mostly on the right and door is mostly to the rear (and close), Then move forward.
2. When chair is in front or mostly in front, Then turn right.
3. When table is mostly on the right and chair is to the left rear, Then move forward.
4. When cabinet is mostly in front, Then turn left.
5. When ATM is in front or mostly in front, Then move forward.
6. When cabinet is mostly to the rear and tree is mostly on the left and ATM is mostly in front, Then stop.
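The generated path description can be modeled as a sequential rule list: the robot advances to the next rule's action whenever that rule's qualitative scene pattern is observed. A minimal sketch, assuming a dictionary encoding of scene patterns and an exact-match predicate (both are illustrative assumptions, and disjunctive conditions like "in front or mostly in front" are simplified to a single label):

```python
# Rules as (scene pattern, action), in route order.
rules = [
    ({"table": "mostly right", "door": "mostly rear, close"}, "move forward"),
    ({"chair": "mostly front"}, "turn right"),
    ({"table": "mostly right", "chair": "left rear"}, "move forward"),
    ({"cabinet": "mostly front"}, "turn left"),
    ({"ATM": "mostly front"}, "move forward"),
    ({"cabinet": "mostly rear", "tree": "mostly left", "ATM": "mostly front"}, "stop"),
]

def matches(cond, state):
    """True when every landmark relation in the pattern holds in the scene."""
    return all(state.get(obj) == rel for obj, rel in cond.items())

def follow_route(scenes):
    """Step through the rules in order as each scene pattern is observed."""
    plan, i = [], 0
    for state in scenes:
        if i < len(rules) and matches(rules[i][0], state):
            plan.append(rules[i][1])
            i += 1
    return plan

scenes = [
    {"table": "mostly right", "door": "mostly rear, close"},
    {"chair": "mostly front"},
]
print(follow_route(scenes))  # -> ['move forward', 'turn right']
```

Keeping the rules sequential (rather than firing the first match anywhere in the list) mirrors how a sketched route is traversed: each landmark pattern is expected in order along the path.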

Sketch-Based Navigation
[Figures: the sketched route map; the robot traversing the sketched route.]

For more information Funded by ONR and the Naval Research Lab