Anchoring AI via Robots and ROS
A. Dobke '14, D. Greene '13, D. Hernandez '15, C. Hunt '14, M. McDermott '14, L. Reed '14, V. Wehner '14, A. Wilby '14, and Z. Dodds

Scalable labs: Robots + ROS

Amid the increase in exemplary online lectures, assignments, and communities in CS and AI, bricks-and-mortar institutions will increasingly assert their value through the labs and situated experiences they provide. This work highlights inexpensive robots that, along with ROS, we have used to scaffold both CS and AI, spanning from CS1 to open-ended investigations at three institutions.

ROS's flexible scaffolding

This project's resources are flexible enough to support a variety of research and educational goals. Western State and Glendale colleges have adapted some of the curriculum, hardware, and software for their CS and AI courses.

Localization and Convoys

In addition to the MuddBot and the drones, ROS enables the use of other platforms. We have implemented panorama-based localization and control on a Nerf launcher using OpenCV SURF features.

Multirobot Coordination

At left, a three-robot convoy demonstrates a follow-the-leader task. Message passing allows the convoy to handle failures (the obstacle at right) by starting recovery routines when the team is disrupted.

Acknowledgments

We gratefully acknowledge funds from The Rose Hills Foundation, the NSF projects REU, CNS, and CPATH, and HMC.

Navigation and Planning

We use the Kinect's depth images to implement a corridor follower capable of wandering the Libra Complex freely, without external guidance. The state machine delegates high-level decisions to the map (below) while tracking the type of surroundings the robot is currently facing. We use OpenCV to draw a map that represents the network of corridors known as the "Libra Complex" at Mudd. (Figure: corridor-following snapshots and their corresponding controller states.)
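The corridor follower's high-level logic amounts to a small state machine driven by Kinect depth data. A minimal sketch is below; the thresholds, state names, and the (left, center, right) mean-depth interface are illustrative assumptions, not the project's actual code:

```python
# Sketch of a depth-based corridor-following state machine.
# The CLEAR threshold, state names, and the three-band depth
# summary are hypothetical, chosen only for illustration.

CLEAR = 1.2  # meters of free space considered "open ahead"

class CorridorFollower:
    """Chooses a steering action from three averaged depth bands."""

    def __init__(self):
        self.state = "follow"

    def step(self, left, center, right):
        """left/center/right: mean depth (m) in each third of the image."""
        if center > CLEAR:
            # corridor is open: keep following, drifting toward the
            # more open side to stay centered between the walls
            self.state = "follow"
            return "veer-left" if left > right else "veer-right"
        elif left > right:
            # blocked ahead: turn toward whichever side is clearer
            self.state = "turn-left"
            return "turn-left"
        else:
            self.state = "turn-right"
            return "turn-right"
```

In a fuller version, the "blocked ahead" states would consult the corridor map before committing to a turn, matching the poster's delegation of high-level decisions to the map.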
Platforms/Tasks

MuddBot: an iRobot Create with a Kinect on pan and/or tilt.
AR.Drone 1 and 2: a $300 wi-fi device with two-camera vision and an accelerometer.

ROS exposes our platforms' physical capabilities. On one hand, ROS allows us to hide details: in CS1 and CS2 we treat the sensors and actuators as black boxes. For AI work, however, we can immediately access its many libraries:
- Visual servoing
- Landmark-based navigation
- Color- and SURF-based vision
- Kinect-based control
- Line-following
- Odometric mapping

(Figures: MuddBot flips; hoop-jumping.)

Map Management

With odometry and estimated landmarks, we can localize the robot (or two) in the map at the same time that the system plans paths for them. The map computes the best path, displays it, and guides the robot's turns; it also assists localization by simulating the range sensors' values.

(Figure: the Libra Complex map, spanning Galileo, Foyer, Olin, Beckman, Parsons, Keck, and Jacobs. Legend: blue ~ carpet, gray ~ cement, white ~ tile.)

The Create's encoders and FOVIS visual odometry have complementary strengths. Below are maps from a MuddBot and ROS. (Figures: a gmapped Libra Complex; a 3D rendering from gmapping.)

Pictured are our Nerf launcher and webcamera. Below are, first, a panorama map and an example of SURF-based localization within it, along with a desired view (green diamond), and second, the result after image-based navigation. (Figure labels: novel image; panorama map, SURF matches, and localization result.)
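The encoder-based odometry mentioned above reduces to integrating small wheel motions into a pose estimate. A minimal differential-drive sketch follows; the tick resolution, wheel base, and function interface are illustrative assumptions, not the Create's actual parameters:

```python
import math

# Sketch of differential-drive odometry from wheel-encoder ticks,
# in the spirit of the Create's encoder-based pose estimate.
# TICKS_PER_METER and WHEEL_BASE are hypothetical values.

TICKS_PER_METER = 2250.0   # assumed encoder resolution
WHEEL_BASE = 0.26          # assumed meters between drive wheels

def integrate_odometry(pose, left_ticks, right_ticks):
    """Advance a (x, y, theta) pose by one pair of encoder readings."""
    x, y, theta = pose
    d_left = left_ticks / TICKS_PER_METER
    d_right = right_ticks / TICKS_PER_METER
    d_center = (d_left + d_right) / 2.0        # forward motion (m)
    d_theta = (d_right - d_left) / WHEEL_BASE  # rotation (rad)
    # advance along the heading at the midpoint of the turn
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```

Equal tick counts drive the pose straight ahead; opposite counts rotate it in place. Visual odometry such as FOVIS supplies the same relative-motion increments from camera data, which is why the two sources can be fused or swapped when one degrades.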