Panoramic Localization with a Sony Aibo. University of Amsterdam Informatics Institute.

Presentation transcript:

1-3 Panoramic Localization with a Sony Aibo, by Jürgen Sturm. University of Amsterdam Informatics Institute.

4 Context: Mobile Robots. RoboCup 4-Legged League: Sony Aibo robots; 4 vs. 4 robots play soccer games fully autonomously.

5 Fully autonomous robots have to master challenges in unknown & unstructured environments: the home, real-world applications, human-machine interaction. Follow a human, navigate, etc.

6

7 The problem: mobile robot localization (estimating the robot's position). Traditional approaches: the Aibo / 4-Legged league uses landmarks with known positions, known shape and known color (manual calibration taking hours). General solutions (SLAM) rely on better hardware: laser range finders, omnidirectional cameras, robots with better odometry (wheels).

8 Features of the new approach: real-time localization on a Sony Aibo; take advantage of the natural features of a room (independence from artificial landmarks, auto-calibration in new environments). Idea: learn a panoramic model of the robot's surroundings for localization.

9 Color clustering: collect interesting colors (around the robot) and determine the 10 most characteristic colors using an EM clustering algorithm. Raw image: 208x160, YCbCr.
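
(A minimal sketch of the color-clustering step, assuming a Gaussian-mixture EM as in scikit-learn; the function and parameter names are illustrative, not from the talk.)

    # Cluster the pixels of a YCbCr camera frame into the 10 most
    # characteristic colors of the scene using EM (Gaussian mixture).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def learn_characteristic_colors(frame_ycbcr, n_colors=10):
        """frame_ycbcr: H x W x 3 array of YCbCr pixels (e.g. 160 x 208 x 3)."""
        pixels = frame_ycbcr.reshape(-1, 3).astype(float)
        gmm = GaussianMixture(n_components=n_colors, covariance_type='diag')
        gmm.fit(pixels)          # EM estimation of the color clusters
        return gmm               # cluster means = characteristic colors

    def quantize(frame_ycbcr, gmm):
        """Map every pixel to the index of its nearest characteristic color."""
        pixels = frame_ycbcr.reshape(-1, 3).astype(float)
        return gmm.predict(pixels).reshape(frame_ycbcr.shape[:2])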

10 Approach: building a virtual panoramic wall. Sector appearance: divide the image into vertical slices, called sectors (360° corresponds to 80 sectors), and count color transitions per sector (between the 10 most characteristic colors of the scene). Raw image: 208x160, YCbCr.
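
(A sketch of the sector-feature extraction under the numbers stated on the slide: 80 sectors per 360°, 10 characteristic colors. Counting vertical color transitions into a 10x10 matrix per sector is an interpretation of the slide text, not a verified detail.)

    import numpy as np

    def sector_features(label_image, n_sectors_in_view, n_colors=10):
        """label_image: H x W array of color indices from the clustering step."""
        height, width = label_image.shape
        sector_width = width // n_sectors_in_view
        features = []
        for s in range(n_sectors_in_view):
            sector = label_image[:, s * sector_width:(s + 1) * sector_width]
            counts = np.zeros((n_colors, n_colors), dtype=int)
            # count transitions between vertically adjacent pixels in this sector
            for a, b in zip(sector[:-1, :].ravel(), sector[1:, :].ravel()):
                if a != b:
                    counts[a, b] += 1
            features.append(counts)
        return features   # one 10x10 transition-count matrix per sector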

11 Learning the panorama model: image features (10-12 sectors/image, 10x10 frequencies/sector) are used to learn the panorama model by estimating frequency distributions per sector. Panorama model: 80 sectors, 10x10 distributions, each defined by 5 bins.
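
(A hedged sketch of accumulating the panorama model with the dimensions from the slide: 80 sectors, a 10x10 set of transition-frequency histograms per sector, 5 bins each. The binning scheme and update rule are assumptions.)

    import numpy as np

    N_SECTORS, N_COLORS, N_BINS = 80, 10, 5

    def empty_model():
        # histogram counts: sector x color x color x bin
        return np.zeros((N_SECTORS, N_COLORS, N_COLORS, N_BINS), dtype=int)

    def to_bin(count):
        """Map a transition count to one of 5 coarse frequency bins (assumed)."""
        edges = [0, 1, 3, 7, 15]           # bins: 0, 1-2, 3-6, 7-14, 15+
        return int(np.searchsorted(edges, count, side='right')) - 1

    def update_model(model, heading_deg, frame_features):
        """Add one frame's sector features at the current heading estimate."""
        first = int(round(heading_deg / 360.0 * N_SECTORS)) % N_SECTORS
        for offset, counts in enumerate(frame_features):
            s = (first + offset) % N_SECTORS
            for a in range(N_COLORS):
                for b in range(N_COLORS):
                    model[s, a, b, to_bin(counts[a, b])] += 1
        return model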

12 Alignment and localization: image features (10-12 sectors/image, 10x10 frequencies/sector) are aligned with the stored panorama model (find the shortest path). Output: rotational estimate, signal-to-noise ratio, confidence range. Example: after learning from 131 frames, the robot rotated 45° to the left.
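
(A sketch of the alignment step, reusing to_bin and the constants from the model sketch above. The exhaustive scoring over all 80 rotations and the z-score-style signal-to-noise measure are illustrative assumptions; the slide's shortest-path alignment is not reproduced here.)

    import numpy as np

    def alignment_scores(model, frame_features):
        """Log-likelihood of the frame under the model for every rotation."""
        freq = (model + 1.0) / (model + 1.0).sum(axis=3, keepdims=True)  # smoothed
        scores = np.zeros(N_SECTORS)
        for first in range(N_SECTORS):
            total = 0.0
            for offset, counts in enumerate(frame_features):
                s = (first + offset) % N_SECTORS
                for a in range(N_COLORS):
                    for b in range(N_COLORS):
                        total += np.log(freq[s, a, b, to_bin(counts[a, b])])
            scores[first] = total
        return scores

    def estimate_rotation(scores):
        best = int(np.argmax(scores))
        snr = (scores[best] - scores.mean()) / (scores.std() + 1e-9)  # crude SNR
        return best * 360.0 / N_SECTORS, snr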

13 Results: learning the appearance of unknown & unstructured environments. Experiments in human environments: rotational test in a living room (at night).

14 Translational test on soccer fields: 4-Legged soccer field, indoors, single learned spot; human soccer field, outdoors, single learned spot.

15 Multi-spot learning: the Aibo is trained on 4 different spots, yielding 4 different panoramas; it is then kidnapped, placed back at arbitrary positions on the field, and tries to walk back to the center spot.
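
(How a rough absolute position follows from multiple trained spots is not spelled out on the slide; the confidence-weighted average below is purely an illustrative assumption, built on the alignment sketch above.)

    import numpy as np

    def rough_position(spot_models, spot_positions, frame_features):
        """spot_models: one panorama model per trained spot; spot_positions: (x, y) of each spot."""
        weights = []
        for model in spot_models:
            _, snr = estimate_rotation(alignment_scores(model, frame_features))
            weights.append(max(snr, 0.0))
        weights = np.array(weights)
        if weights.sum() == 0.0:
            return None                       # no confident match anywhere
        return tuple(np.average(np.array(spot_positions), axis=0, weights=weights))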

16 Conclusions. Possibilities for the 4-Legged league: getting rid of all artificial landmarks; 11 vs. 11 games (bigger field); outdoor demonstrations become possible.

17 Possible usage for the home league: distinguish the living room from the kitchen or garden; rough but quick map building; find the relative position of the TV/stove/etc. on this map.

18 Other applications: CareBot (navigation in a closed indoor environment); mobile applications (for example on cellular phones) for quick positional estimates (tourism).

19 Rotational estimate and Confidence range in numbers

20 Architecture overview: camera images and odometry updates feed sector-based feature extraction and SNR computation; the result is aligned with the panorama model and fused into an orientation buffer (0°-360°) with a confidence range.
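
(A hedged sketch of the orientation buffer suggested by the architecture slide: odometry updates advance the heading estimate and each visual-compass measurement corrects it, weighted by its signal-to-noise ratio. The fusion rule and gains are assumptions.)

    class OrientationBuffer:
        def __init__(self):
            self.heading = 0.0        # degrees
            self.confidence = 0.0     # 0 (unknown) .. 1 (certain)

        def on_odometry(self, delta_heading, drift=0.05):
            self.heading = (self.heading + delta_heading) % 360.0
            self.confidence = max(0.0, self.confidence - drift)   # drift lowers confidence

        def on_compass(self, measured_heading, snr, gain=0.5):
            # blend the visual estimate in, more strongly when the SNR is high
            w = gain * snr / (snr + 1.0) if snr > 0 else 0.0
            error = ((measured_heading - self.heading + 180.0) % 360.0) - 180.0
            self.heading = (self.heading + w * error) % 360.0
            self.confidence = min(1.0, self.confidence + w * (1.0 - self.confidence))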

21 Conclusions: an accurate estimate of the rotation from a single learned spot (up to 40 meters); a good estimate of the relative distance from a single learned spot (up to 40 meters); a rough estimate of the absolute position from multiple trained spots.

22 Panoramic Localization with a Sony Aibo, by Jürgen Sturm. University of Amsterdam Informatics Institute. User manual: the head button always resets the robot and triggers the autoshutter & color clustering; press the front button to manually trigger color clustering. In training mode: press the middle button to start learning the first spot; press the middle button again to continue learning on more spots; press the back button to switch to localization mode. In localization mode: press the front button to switch between rotational and translational mode; press the middle button to reset the panorama and start learning; press the back button to switch between find and set-reference mode. A fully working memory stick image can be downloaded from