Spatial Reasoning for Semi-Autonomous Vehicles Using Image and Range Data
Marjorie Skubic and James Keller
Students: Sam Blisard, George Chronis, Grant Scott, Bob Luke, Craig Bailey, Matt Williams, Charlie Huggard
Electrical & Computer Engineering Dept., University of Missouri-Columbia
Motivation: Spatial Language for Human-Robot Dialog
Cognitive models indicate that people use spatial relationships in navigation and other spatial reasoning (Previc; Schunn).
“There is a desk in front of me and a doorway behind it.”
“Go around the desk and through the doorway.”
Spatial language allows more natural interaction with robotic vehicles.
Spatial language can be used to:
Focus attention – “look to the left of the telephone”
Issue commands – “pick up the book on top of the desk”
Describe a high-level representation of a task – “go behind the counter, find my coffee mug on the table, and bring it back to me”
Receive feedback from the robot describing the environment – “there is a book on top of the desk to the right of the coffee mug”
Our Spatial Modeling Tool
Capturing qualitative spatial information between 2 objects:
– The histogram of constant forces
– The histogram of gravitational forces
Features extracted from the histograms are used to generate linguistic spatial terminology (Matsakis et al., 1999, 2001)
Our Spatial Modeling Tool
[Figure: histogram of forces computed between objects A and B along direction v (Matsakis et al., 1999)]
Capturing qualitative spatial information between 2 objects
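The force histograms can be illustrated with a simple point-pair computation. The sketch below is a hypothetical simplification, not the exact longitudinal-section algorithm of Matsakis et al.: it bins the direction of every point pair between objects A and B, weighting each pair by 1/d^r, so r = 0 gives the histogram of constant forces and r = 2 the histogram of gravitational forces.

```python
import numpy as np

def force_histogram(pts_a, pts_b, r=0, n_bins=180):
    """Approximate histogram of forces between two point-sampled objects.

    pts_a, pts_b : (N, 2) arrays of (x, y) points sampled from each object.
    r = 0 -> histogram of constant forces; r = 2 -> gravitational forces.
    Simplified point-pair version, not the exact Matsakis et al. algorithm.
    """
    hist = np.zeros(n_bins)
    for ax, ay in pts_a:
        dx = pts_b[:, 0] - ax               # vector from an A point to all B points
        dy = pts_b[:, 1] - ay
        d = np.hypot(dx, dy)
        mask = d > 0                        # skip coincident points
        theta = np.arctan2(dy[mask], dx[mask])   # pair direction in (-pi, pi]
        bins = ((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        np.add.at(hist, bins, d[mask] ** (-r))   # each pair contributes 1/d^r
    return hist
```

For a single point of A at the origin and a single point of B at (10, 0), all of the mass lands in the bin containing θ = 0 (bin 90 of 180), i.e. B lies in the +x direction from A.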
Linguistic Scene Description
1. Each histogram (constant forces and gravitational forces) gives its opinion about the relative position between the objects that are considered.
2. The two opinions are combined. Four numeric and two symbolic features result from this combination.
A system of 27 fuzzy rules and meta-rules allows meaningful linguistic descriptions to be produced.
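As a rough illustration of how histogram features could drive linguistic terms, the hypothetical sketch below collapses the full 27-rule fuzzy system into just two features: the circular mean of a histogram selects one of eight primary directions, and its concentration selects a “perfectly / mostly / loosely” qualifier. The thresholds and the two-feature reduction are assumptions; the actual system combines four numeric and two symbolic features from both histograms.

```python
import numpy as np

DIRECTIONS = ["RIGHT", "ABOVE-RIGHT", "ABOVE", "ABOVE-LEFT",
              "LEFT", "BELOW-LEFT", "BELOW", "BELOW-RIGHT"]

def describe(hist):
    """Map a force histogram (bins spanning [-pi, pi)) to a rough phrase.

    Hypothetical two-feature stand-in for the 27-rule fuzzy system:
    circular mean -> primary direction, concentration -> qualifier.
    Assumes theta = 0 points right and angles grow counter-clockwise.
    """
    n = len(hist)
    theta = np.linspace(-np.pi, np.pi, n, endpoint=False)
    total = hist.sum()
    if total == 0:
        return "no spatial relationship"
    c = (hist * np.cos(theta)).sum() / total     # first circular moment
    s = (hist * np.sin(theta)).sum() / total
    mean_dir = np.arctan2(s, c)                  # dominant direction
    concentration = np.hypot(c, s)               # 1.0 = all mass in one bin
    idx = int(round(mean_dir / (np.pi / 4))) % 8
    qualifier = ("perfectly" if concentration > 0.95
                 else "mostly" if concentration > 0.7
                 else "loosely")
    return f"{qualifier} {DIRECTIONS[idx]}"
```

A histogram with all of its mass at θ = 0 yields “perfectly RIGHT”; spreading the mass over a wide arc weakens the qualifier toward “mostly” and then “loosely”, echoing the graded descriptions in the examples that follow.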
Linguistic Scene Description Example
“The tower is perfectly to the LEFT of the stack buildings.”
Linguistic Scene Description Example
“The group of storehouses is loosely ABOVE-LEFT of the stack buildings.”
Linguistic Scene Description Example
“The storehouse is ABOVE the stack buildings, but shifted to the RIGHT.”
HRI Scenario: Scene 1
Human: “How many objects do you see?”
Robot: “I am sensing 4 objects.”
Human: “Where are they?”
Robot: “There are objects in front of me. I am surrounded from the rear. The object #4 is to the right of me.”
Human: “Where is the nearest object in front of you?”
Robot: “The object #3 is in front of me but extends to the right relative to me. The object is very close.”
HRI Scenario: Scene 1, cont.
Human: “Object #3 is a box.”
Robot: “I now know object #3 is a box.”
Human: “Go to the right of the box.”
HRI Scenario: Scene 2
The robot has moved to the right of the box. The pillar has now come into view (object #4).
An alternate representation for the horizontal plane, stored in a robot-centric frame:
SURROUNDED on the front.
An object is BEHIND, FAR.
Combining range and image data: combining the horizontal and vertical planes.
Object #3 is actually “Bob.” Bob is recognized and labeled by the MSNN.
Object Recognition with the MSNN [Won & Gader, 1997]
[Figure: processing pipeline from the input scene through the confidence field, post-processing, and visual verification to the output images]
Linguistic response: “I found Wiley at 4 degrees, Bob at 1 degree.”
Object recognition using the MSNN
Sketch Interface
Path description generated from the sketched route map:
1. When table is mostly on the right and door is mostly to the rear (and close), Then Move forward
2. When chair is in front or mostly in front, Then Turn right
3. When table is mostly on the right and chair is to the left rear, Then Move forward
4. When cabinet is mostly in front, Then Turn left
5. When ATM is in front or mostly in front, Then Move forward
6. When cabinet is mostly to the rear and tree is mostly on the left and ATM is mostly in front, Then Stop
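One way such a sketched path description could be executed is to treat each numbered rule as a fuzzy condition-action pair and fire the best-matching rule against the robot's current spatial memberships. The sketch below is a hypothetical illustration: the condition names, membership values, min-based fuzzy AND, and "Stop" fallback are assumptions, not the authors' implementation.

```python
# Hypothetical executor for a sketched path description: each rule is a
# set of fuzzy spatial conditions plus an action; the rule with the
# highest degree of match fires.

def match(conditions, memberships):
    """Degree of match for one rule: fuzzy AND (min) over its conditions.

    A condition absent from `memberships` contributes 0, so the rule
    cannot fire when one of its landmarks is not currently perceived.
    """
    return min(memberships.get(cond, 0.0) for cond in conditions)

RULES = [  # condition names and actions are illustrative
    ({"table mostly right", "door mostly rear"}, "Move forward"),
    ({"chair mostly front"}, "Turn right"),
    ({"cabinet mostly front"}, "Turn left"),
]

def next_action(memberships):
    """Pick the action of the best-matching rule; stop if nothing matches."""
    degree, action = max((match(conds, memberships), act)
                         for conds, act in RULES)
    return action if degree > 0 else "Stop"
```

With memberships {"table mostly right": 0.9, "door mostly rear": 0.8}, the first rule matches at degree 0.8 and the robot moves forward; with no matching landmarks it stops.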
Sketch-Based Navigation
[Figures: the sketched route map; the robot traversing the sketched route]
For more information
Funded by ONR and the Naval Research Lab