Robotics at HMC: Summer 2008

Presentation transcript:

Robotics at HMC: Summer 2008

Devin Smith '09, Hannah Hoersting '09, Lesia Bilitchenko '10 (Cal Poly Pomona), Sabreen Lakhani '11, Becky Green '11, Pam Strom '11, Kate Burgers '11, Zeke Koziol '10, Elaine Shaver '09, Peter Mawhorter '08, and Zachary Dodds

Students at Harvey Mudd have the opportunity to engage in robotics projects and research throughout their time at HMC. In the summer of 2008, for example, three different projects involving first-year, sophomore, junior, and senior students resulted in:

 Three novel, low-cost robot platforms leveraging off-the-shelf hardware for both indoor (Qwerkbot, Create) and outdoor (PowerWheels) autonomy
 Mapping implementations (FastSLAM) accessible enough to run on any platform with landmark-recognition capabilities, e.g., Scribbler and Create
 An investigation of the capabilities of image profiles as a basis for vision-only robot autonomy, including odometry, control, and mapping
 A successful entry into 2008's AAAI robot exhibition (Chicago; July 2008) and a (possibly successful) entry into the Tapia robotics contest (Portland; April 2009)

Acknowledgments

This work was made possible by funds from NSF DUE #, the Institute for Personal Robots in Education (IPRE), the Baker Foundation, and funds and resources from Harvey Mudd College.

Our platforms

Indoor: Students have built, modified, and programmed several platforms for indoor exploration, including landmark-based mapping on an iRobot Create through an onboard laptop. Using the OLPC platform yields a remarkably accessible One Robot Per Child. The Scribbler/Fluke combination is even more cost-effective, while the Qwerk-based robot offers an array of sonars and camera-panning capabilities.

Outdoor: HMC has investigated accessible outdoor robotics via several PowerWheels vehicles. Equipped with cameras for path following, GPS for position tracking, and sonars for avoiding obstacles, these low-cost platforms highlight both the engineering and computational capabilities of HMC students. With these we run a local version of Penn State Abington's Mini Grand Challenge.

Map-building: With vision onboard, these systems build maps by integrating landmark sightings. FastSLAM uses both Kalman and particle filters in order to cull possibilities from a population of hypotheses. Such survival-of-the-fittest approaches can adapt to the computational resources currently available.

[Figure: Indoor mapping platforms at HMC and an example run of FastSLAM's map-updating. The red circle represents the robot's (badly incorrect) odometry, and the green circles show estimated true positions within their maps' landmarks, as the One Robot Per Child platform navigates among the landmarks.]
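FastSLAM itself isn't spelled out on the poster, so the following is only a minimal Python sketch of the culling idea described under Map-building: each particle pairs a pose hypothesis with a small per-landmark Kalman update, and resampling keeps the fittest hypotheses. The class names, noise constants, and measurement model here are assumptions for illustration, not the students' implementation.

    # Minimal FastSLAM-style sketch (hypothetical models, not HMC's code):
    # each particle carries a pose hypothesis plus one tiny Kalman filter
    # per landmark; low-likelihood particles are culled by resampling.
    import math
    import random

    MOTION_NOISE = 0.1   # std. dev. added per motion step (assumed value)
    SENSOR_NOISE = 0.5   # std. dev. of landmark sightings (assumed value)

    class Particle:
        def __init__(self, x=0.0, y=0.0):
            self.x, self.y = x, y
            self.landmarks = {}      # id -> (mean_x, mean_y, variance)
            self.weight = 1.0

        def move(self, dx, dy):
            """Propagate the pose hypothesis with noisy odometry."""
            self.x += dx + random.gauss(0, MOTION_NOISE)
            self.y += dy + random.gauss(0, MOTION_NOISE)

        def observe(self, lid, mx, my):
            """Fold one robot-relative landmark sighting (mx, my) into
            this particle's map with a scalar Kalman update."""
            gx, gy = self.x + mx, self.y + my   # sighting in global frame
            if lid not in self.landmarks:
                self.landmarks[lid] = (gx, gy, SENSOR_NOISE ** 2)
                return
            ex, ey, var = self.landmarks[lid]
            k = var / (var + SENSOR_NOISE ** 2)  # Kalman gain
            self.landmarks[lid] = (ex + k * (gx - ex),
                                   ey + k * (gy - ey),
                                   (1 - k) * var)
            # weight by how well the sighting matched the stored estimate
            dist2 = (gx - ex) ** 2 + (gy - ey) ** 2
            self.weight *= math.exp(-dist2 / (2 * SENSOR_NOISE ** 2))

    def resample(particles):
        """Survival of the fittest: draw a new population in proportion
        to the particles' weights, then reset the weights."""
        total = sum(p.weight for p in particles)
        weights = [p.weight / total for p in particles]
        chosen = random.choices(particles, weights, k=len(particles))
        fresh = []
        for p in chosen:
            q = Particle(p.x, p.y)
            q.landmarks = dict(p.landmarks)
            fresh.append(q)
        return fresh

A typical loop would call move() on every particle for each odometry step, observe() for each landmark sighting, and resample() once the weights grow uneven; shrinking or growing the population is one way such a filter can adapt to the computation available.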
The vision: Vision-only vehicles

Cameras are cheap and available sensors, and natural agents (us!) offer a tantalizing vision: autonomous navigation and spatial reasoning using visual data alone. Each of the platforms investigated at HMC uses pixels as its primary source of information about its surroundings. 2008's projects extend earlier HMC work on the foundations of vision-only vehicles: 2007's projects included revamping the OpenCV library's support for importing and exporting images on Mac OS X, three-dimensional reconstruction from image sequences, and deploying robots atop these capabilities. We look forward to layering further capabilities atop this year's student work during the summer of 2009 and beyond.

[Figure: An overhead map built from OpenCV images taken by a Create robot.]

The power of image profiles

Visual input offers substantial promise as a robotic sensor. Indeed, there is too much information captured by the 2D images of a scene: disentangling the contributions of lighting, object characteristics, and 3D geometry is challenging, to say the least. Devin Smith '09 set out to investigate the limits of image profiles, feature vectors of pixel sums. Matching these sums yields video-only odometry and enables a camera-based virtual compass. Intra-image differences suggest interesting destinations and can also control velocity and avoid walls, resulting in a vision-only system with full autonomy. Larger-scale mapping requires matching locations; that effort progressed in parallel in our REU.

[Figure: The yellow curve is an image profile, the intensity sum of the image's pixels; the red curve measures the absolute changes in that yellow profile, a heuristic for identifying "interesting" headings for the robot to pursue.]

[Figure: 15 video frames and about 15° separate these images, creating a visual compass from profiles.]

[Figure: Snapshots of real-time compass and map updating, along with the profile-based interest operator choosing the robot's next heading to be the hallway's end.]
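To make the profile machinery concrete, here is a small illustrative sketch, not the project's code, of the three operations just described: a profile as a column-wise intensity sum, a visual compass that estimates rotation by sliding two profiles past one another, and the change-based interest heuristic. The numpy formulation and the max_shift parameter are assumptions.

    # Image-profile sketch (assumed details: grayscale frames arrive as
    # 2-D numpy arrays; rotation is found by brute-force shift search).
    import numpy as np

    def profile(image):
        """An image profile: the column-wise sum of pixel intensities."""
        return image.sum(axis=0).astype(np.float64)

    def compass_shift(prof_a, prof_b, max_shift=40):
        """Estimate rotation between two frames as the horizontal shift
        (in pixels) that best aligns their profiles, i.e., the smallest
        mean absolute difference over the overlapping columns."""
        best, best_err = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            if s >= 0:
                a, b = prof_a[s:], prof_b[:len(prof_b) - s]
            else:
                a, b = prof_a[:s], prof_b[-s:]
            err = np.mean(np.abs(a - b))
            if err < best_err:
                best, best_err = s, err
        return best

    def interest(prof):
        """The interest heuristic: the column where the profile changes
        most sharply (e.g., a hallway's end) is a candidate heading."""
        change = np.abs(np.diff(prof))
        return int(np.argmax(change))

The pixel shift returned by compass_shift would then be scaled by the camera's degrees-per-pixel calibration to yield a heading change, and accumulating those changes frame-to-frame gives the video-only odometry described above.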
Closing the loop: mapping via image profiles

Do image profiles suffice to build consistent maps of the environment? Hannah Hoersting '09 and Lesia Bilitchenko '10 designed, implemented, and tested algorithms for matching locations through their image profiles. These matches, in turn, enable large-scale correction of the inevitable drift from incremental odometric data. Multiple-hypothesis tracking allows future observations to correct ambiguous past data. The result is a topologically consistent map for which shortest-path graph algorithms suffice to navigate.

The plots compare our visual odometry with RatSLAM's original. Our approach better estimates sharp corners and overall topology, as the robot did, in fact, intersect its path where shown.

[Figures: raw visual odometry en route to a corrected map via profile matching; the raw odometry from a four-loop run, after matching one segment, and after matching all segments; an image from leg 6 matching another image from leg 10.]

AAAI '08 Robot Exhibition: Mapping for All

The Association for the Advancement of Artificial Intelligence (AAAI) sponsors one of the longest-running robot venues in the world. Elaine, Peter, and Zeke exhibited at AAAI this summer; their mapping platforms earned the blue ribbon for education for map-building atop three low-cost platforms. The robots smoothly scale across pedagogical levels: they engaged middle-school girls who visited the event and will be used in HMC's Fall '08 seminar for incoming students, entitled Robotics: hardware and software. The AAAI venue offers HMC students a chance to deeply consider AI. HMC alums now study AI, vision, and robotics at CMU, UCSD, UW, UCLA, Utah, Oregon State, and Duke.

[Figure: HMC earned the AAAI '08 blue ribbon for education among the 17 robot exhibits.]
[Figure: Middle-school visitors to the conference tried out the robots.]
[Figure: Elaine Shaver '09 guiding middle-school visitors to AAAI as they problem-solve with HMC's accessible platforms.]

The Tapia 2009 Robotics Competition

A team of four first-year students designed, built, and tested an iRobot Create-based entry for the upcoming Tapia robotics competition, held in Portland, OR in April 2009. The task is a search for "survivors" in a partially unknown environment. Their solution merges hardware and software through several new Robotics at HMC systems: an OpenCV-based landmark recognition system, a Python GUI to facilitate testing, and a Java mapping interface, all integrated into a state machine that guides and governs landmark-finding and returns the robot to the start entirely autonomously. How will it do? We'll see….

[Figures: HMC's entry to the 2009 Tapia robotics competition (center); the robot's landmark recognition; the team's mapping interface, showing the robot, markers, map state, and sonars; the project in action at HMC and inaction at Panera; run snapshots labeled "centering and approaching a new marker," "parallel-park adjustment," and "marker ID'ed; reverting to wall-following."]
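The poster shows the Tapia run's stages only as snapshots, so the following Python dispatch loop is a hypothetical sketch of such a state machine, with states named after the figure labels; every robot method in it (follow_wall, sees_marker, and the rest) is an invented stand-in rather than the team's actual API.

    # Hedged sketch of a landmark-search controller as a state machine.
    # State names follow the run snapshots; the robot interface is assumed.

    def wall_follow(robot):
        robot.follow_wall()
        if robot.sees_marker():
            return "approach"      # snapshot: "centering and approaching"
        return "wall_follow"

    def approach(robot):
        robot.center_on_marker()
        if robot.at_marker():
            return "adjust"        # snapshot: "parallel-park adjustment"
        return "approach"

    def adjust(robot):
        robot.parallel_park()
        robot.record_marker()      # snapshot: "marker ID'ed"
        if robot.all_markers_found():
            return "go_home"       # return to the start autonomously
        return "wall_follow"       # snapshot: "reverting to wall-following"

    def go_home(robot):
        robot.drive_to_start()
        return "done"

    STATES = {"wall_follow": wall_follow, "approach": approach,
              "adjust": adjust, "go_home": go_home}

    def run(robot):
        state = "wall_follow"
        while state != "done":
            state = STATES[state](robot)

Keeping each behavior as a small function returning the next state's name is one simple way a controller like this can both guide landmark-finding and fall back to wall-following between markers.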