Developing Artificial Intelligence in Robotics

Viktoria Golobev & Alina Marchenko
Advisor: Prof. Ronen Brafman

Teaching Robots to Use the Elevator
What is it good for?
- Helping disabled people.
- Delivering mail and packages.
- Buying groceries.
- Guiding people in hospitals.

Steps and Challenges
- Detecting the button.
- Calculating a path to the button.
- Pushing the button.
- Going back to driving position.
- Recognizing the elevator's arrival and entering the elevator.

Robot Operating System (ROS)
ROS is a widely used robot application development platform that provides features such as message passing, distributed computing, and code reuse. The platform is built on nodes: processes that perform computation using the ROS client libraries. The ROS message-passing middleware lets nodes communicate through topics. All of this communication is coordinated by the ROS Master, which works much like a DNS server: it provides name registration and lookup for the rest of the nodes.
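To make the node/topic model concrete, here is a minimal sketch of a ROS 1 node in Python with rospy; the "chatter" topic and "listener" node name are made-up examples, not part of the project:

```python
# Minimal ROS 1 subscriber sketch (assumes a ROS 1 installation with rospy).
# The node registers itself with the ROS Master, which resolves the topic
# name so publisher and subscriber can then exchange messages directly.
import rospy
from std_msgs.msg import String

def callback(msg):
    rospy.loginfo("heard: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("listener")                    # name registration with the Master
    rospy.Subscriber("chatter", String, callback)  # topic lookup via the Master
    rospy.spin()                                   # hand control to the callback loop
```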

The Robot
- 2 x HD cameras
- Kinect 2 RGB-D camera (mounted on a pan-tilt system)
- SR300 RGB-D camera (above the gripper)
- GPS
- 9-DoF IMU
- 3 x ultrasonic range finders (left, rear, right)
- Emergency button
- Battery meter

Step 1 - Detecting the Button
- To make the elevator's button recognizable, we put a red sticker on it.
- The Kinect 2 camera publishes messages of type sensor_msgs/PointCloud2 to the topic /kinect2/qhd/points.
- We use OpenCV, a library for real-time computer vision, to recognize red objects.
- We filter out objects of the wrong size.
- The Point Cloud Library (PCL) processes the cloud and gives us the button's coordinates.
- Using image moments, we find the center of the button.
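A rough sketch of the red-sticker detection in Python with OpenCV is shown below; the HSV thresholds and area limits are assumptions, and in the real pipeline the frame would be derived from the point cloud rather than a plain image:

```python
# Red-button detection sketch (assumes OpenCV 4 and a BGR camera frame).
import cv2
import numpy as np

def find_button_center(bgr, min_area=200, max_area=5000):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if min_area <= cv2.contourArea(c) <= max_area:  # drop wrong-sized objects
            m = cv2.moments(c)                          # image moments give the centroid
            return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
    return None  # no red object of plausible button size found
```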

Step 2 - Calculating a Path to the Button
- The button coordinates are given with respect to the head camera.
- We use TF, a package that keeps track of multiple coordinate frames over time, to convert them to base coordinates.
- If the button is too high, we lift Armadillo's torso to reach it.
- If the button is out of reach, the robot goes back to driving position.
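A sketch of this frame conversion with tf2, the current Python interface to TF, follows; the frame names and the sample point are assumptions:

```python
# Convert a button position from the camera frame to the robot base frame.
# Frame names and coordinates here are illustrative assumptions.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped support for Buffer.transform
from geometry_msgs.msg import PointStamped

rospy.init_node("button_to_base")
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)  # keep a reference so it stays alive

pt = PointStamped()
pt.header.frame_id = "kinect2_link"        # assumed camera frame name
pt.point.x, pt.point.y, pt.point.z = 0.8, 0.1, 1.2  # button in camera frame

rospy.sleep(1.0)  # let the listener fill its transform buffer
base_pt = buf.transform(pt, "base_link", rospy.Duration(1.0))
print(base_pt.point)  # button position in base coordinates
```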

Step 3 - Pushing the Button
We add the button to the robot's collision matrix so the motion planner is aware of it when moving the arm.
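One plausible way to register the button with the planner is through MoveIt's Python planning-scene interface, sketched below; the frame, pose, and box size are assumptions, and the slides do not say whether this exact API was used:

```python
# Sketch: add the elevator button as a collision object so MoveIt's
# arm planner accounts for it (pose, frame, and size are assumptions).
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

rospy.init_node("add_button_collision")
scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)  # give the scene interface time to connect

pose = PoseStamped()
pose.header.frame_id = "base_link"
pose.pose.position.x, pose.pose.position.y, pose.pose.position.z = 0.6, 0.0, 1.1
pose.pose.orientation.w = 1.0
scene.add_box("elevator_button", pose, size=(0.02, 0.02, 0.01))
```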

Step 4 - Elevator Opening Recognition and Entering the Elevator
- Armadillo uses a laser scanner to recognize the elevator's arrival. The scanner publishes messages of type sensor_msgs/LaserScan to the topic /scan.
- The sensor_msgs/LaserScan message contains distance measurements between the robot and its surroundings. When the elevator door opens, the measured distance grows.
- We use MoveIt, a motion planning framework, together with gmapping to calculate a safe path into the elevator.
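A minimal sketch of the door-open check on /scan follows; the distance threshold and the choice of the middle beam are assumptions:

```python
# Watch /scan and report when the range toward the elevator door jumps,
# which indicates the door has opened (threshold value is an assumption).
import rospy
from sensor_msgs.msg import LaserScan

OPEN_THRESHOLD = 1.5  # meters; assumed reading once the door is open

def scan_cb(scan):
    mid = len(scan.ranges) // 2            # beam assumed to face the door
    if scan.ranges[mid] > OPEN_THRESHOLD:
        rospy.loginfo("Elevator door appears open")

rospy.init_node("door_monitor")
rospy.Subscriber("/scan", LaserScan, scan_cb)
rospy.spin()
```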

The End