Submitted by: Giorgio Tabarani, Christian Galinski
Supervised by: Amir Geva
CIS and ISL Laboratory, Technion

 Introduction
 Tools
 Algorithm
◦ GoodFeaturesToTrack
◦ CalcOpticalFlowPyrLK
◦ Evaluate track point
◦ Distance calculation
◦ Odometry correction
 Theoretical results
 Going live
 Conclusion

 Laser sensors are great devices for obstacle detection in robots, but come with a high price tag
 Cameras, on the other hand, are relatively cheap and easily accessible to everyone (e.g. smartphones or digital cameras)
 In this project, we used a smartphone’s camera on a robot to determine the distance between the robot and its closest obstacle and steer it away in order to avoid collision

 Developed in C++ using Visual Studio
 OpenCV library for image analysis
 Glued to the robot through an Android application, using JNI and pictures from the device’s rear camera
◦ Developed using a Nexus 5 device, with pictures scaled down to 640x480 pixels

Overview

1. goodFeaturesToTrack on both frames
2. cornerSubPix on both vectors from the previous step
3. calcOpticalFlowPyrLK
4. Evaluate the track point
5. Calculate distances and correct odometry

GoodFeaturesToTrack

 OpenCV function used to find the strongest corner points to keep track of; these are the points we later use to calculate the distance each object has moved
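A minimal sketch of how this step might look with OpenCV, including the cornerSubPix refinement from the pipeline overview (the parameter values are our illustrative assumptions, not the project’s actual settings):

#include <opencv2/imgproc.hpp>
#include <vector>

// Detect strong corners in a grayscale frame and refine them to
// sub-pixel accuracy (run on both frames of a pair).
std::vector<cv::Point2f> detectCorners(const cv::Mat& gray)
{
    std::vector<cv::Point2f> corners;
    cv::goodFeaturesToTrack(gray, corners,
                            200,    // maxCorners: keep at most this many points
                            0.01,   // qualityLevel: relative corner-strength threshold
                            10);    // minDistance: minimal spacing between corners, px
    if (!corners.empty())
        cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                         cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT,
                                          30, 0.01));
    return corners;
}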

CalcOpticalFlowPyrLK

 Pyramidal Lucas-Kanade optical flow algorithm, used to find where each corner from one frame was mapped in the second frame
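A sketch of this step (our illustration): the corners found above are fed to calcOpticalFlowPyrLK, and pairs the tracker fails to match are dropped:

#include <opencv2/video/tracking.hpp>
#include <vector>

// Track corners from the first frame into the second; keep only the
// pairs that were successfully matched.
void trackCorners(const cv::Mat& prevGray, const cv::Mat& nextGray,
                  std::vector<cv::Point2f>& prevPts,
                  std::vector<cv::Point2f>& nextPts)
{
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, nextGray, prevPts, nextPts, status, err);

    size_t k = 0;
    for (size_t i = 0; i < status.size(); ++i) {
        if (!status[i]) continue;   // corner lost between frames: discard it
        prevPts[k] = prevPts[i];
        nextPts[k] = nextPts[i];
        ++k;
    }
    prevPts.resize(k);
    nextPts.resize(k);
}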

Evaluate track point

 The track point is the point the robot is moving towards
 We took each two matching points from the two frames and calculated the equation of the line passing through them
 The intersection point of all these lines is the track point
 This is the only point that does not change its position between two consecutive frames, and thus we used it in calculating the distance
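In practice the lines will not all meet in exactly one point, so a least-squares intersection is a natural way to compute it. A sketch under that assumption (our own illustration, not necessarily the project’s exact method):

#include <opencv2/core.hpp>
#include <vector>

// Least-squares intersection of the lines through each matched pair:
// each line through p (first frame) and q (second frame) contributes
// the constraint n . x = n . p, with n perpendicular to q - p.
cv::Point2f estimateTrackPoint(const std::vector<cv::Point2f>& prevPts,
                               const std::vector<cv::Point2f>& nextPts)
{
    int m = (int)prevPts.size();
    cv::Mat A(m, 2, CV_32F), b(m, 1, CV_32F);
    for (int i = 0; i < m; ++i) {
        cv::Point2f d = nextPts[i] - prevPts[i];
        cv::Point2f n(-d.y, d.x);                // normal of the line
        A.at<float>(i, 0) = n.x;
        A.at<float>(i, 1) = n.y;
        b.at<float>(i, 0) = n.dot(prevPts[i]);
    }
    cv::Mat x;
    cv::solve(A, b, x, cv::DECOMP_SVD);          // least-squares solution
    return cv::Point2f(x.at<float>(0), x.at<float>(1));
}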

Distance calculation

 Using the “Pinhole projection formula” and the fact that Z1 = Z2 + d, we get the following formula:

Z2 = x1 · d / (x2 − x1)

◦ Z1 - the object’s distance from the point where the first frame was taken, in cm
◦ Z2 - the object’s distance from the point where the second frame was taken, in cm
◦ x1 - the object’s size in pixels in the first frame
◦ x2 - the object’s size in pixels in the second frame
◦ d - step size, in cm

(The pinhole model gives pixel size x = f · S / Z for an object of real size S at distance Z, so x1 · Z1 = x2 · Z2; substituting Z1 = Z2 + d and solving for Z2 yields the formula.)

 Using the aforementioned track point:
◦ The distance in pixels between the track point and a corner in the first frame defines x1
◦ The distance in pixels between the track point and the same corner in the second frame defines x2
◦ Since the track point does not change between the two frames, we are effectively measuring the same object’s size in both frames
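For intuition, with made-up numbers: a corner 100 px from the track point in the first frame and 125 px from it in the second, after a d = 10 cm step, gives Z2 = 100 · 10 / (125 − 100) = 40 cm. As a small helper (our illustration):

// Z2 = x1 * d / (x2 - x1); x1, x2 are pixel distances from the track
// point, d is the step size in cm. A negative result means the object
// did not grow in the image, i.e. the robot is not approaching it.
float obstacleDistance(float x1, float x2, float d)
{
    return (x2 > x1) ? x1 * d / (x2 - x1) : -1.0f;
}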

Odometry correction

 As it stands, the results from the above calculations were not sufficient
 Attempts to improve the calculations included updating the odometry step size per move, thus achieving a more accurate estimate of the distance from an obstacle

 Using the above-mentioned equation, if we denote r_i = x1(i) / x2(i) for the i-th transition between frames, where x1(i) and x2(i) were calculated from the frame analysis as explained before, we can write:

Z(i+1) = r_i · Z(i), so the size of move i is d_i = Z(i) − Z(i+1) = Z(i) · (1 − r_i)

Chaining over n transitions: Z(n+1) = r_1 · r_2 · … · r_n · Z(1), and therefore n · d_avg = Z(1) · (1 − r_1 · r_2 · … · r_n)

 Thus, by knowing the average odometry step size d_avg we can get the value of Z(1), and afterwards calculate the value of every Z(i) and d_i, so each estimate depends on a realistic step size for the move itself and not on an average one which depends on all the other moves as well
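A sketch of this correction (our reconstruction of the computation; variable names are ours): given the per-transition ratios r_i and the average step size, recover Z(1) and then each move’s realistic step size d_i:

#include <vector>

// r[i] = x1(i) / x2(i) for each of the n transitions between frames;
// avgStep is the average odometry step size in cm.
std::vector<float> correctedSteps(const std::vector<float>& r, float avgStep)
{
    float prod = 1.0f;
    for (float ri : r) prod *= ri;                 // r_1 * r_2 * ... * r_n
    float Z = r.size() * avgStep / (1.0f - prod);  // Z(1) from n * d_avg = Z(1) * (1 - prod)
    std::vector<float> d;
    for (float ri : r) {
        d.push_back(Z * (1.0f - ri));              // d_i = Z(i) * (1 - r_i)
        Z *= ri;                                   // Z(i+1) = r_i * Z(i)
    }
    return d;
}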

 We ran several simulations on the provided “Calibration” data set
 The quality of the algorithm was measured by the quality of the resulting graph and its trend line

 First try:
◦ Assuming each step’s size was the given encoder step approximation
◦ We see a lot of inconsistencies in the slope

 Second try:
◦ Using the improved algorithm described in the odometry correction section, with n transitions between frames
◦ Outstanding improvement and consistency in the results

 Extra try:
◦ Simulation on a reduced input which is closer to the obstacle
◦ Even better results

 Moving towards a more practical solution:
◦ Using the given step size for a fixed number of steps, and then using the average step size for the remaining moves

 While integrating the image analysis and processing code, we started facing real-world problems such as drift and hardware issues

 The robot’s rotations were not always by the same angle, despite having the same parameters
◦ Solution: rotating at a constant speed over a fixed amount of time
 This solution caused another problem of its own: the encoders’ readings became corrupted under the new rotation scheme, and the robot’s forward movement suffered as a result
◦ Solution: none found, since we do not have direct access to each encoder separately in order to correct it

 Dealing with possible unwanted rotations caused by image analysis hiccups (distances in cm):

if (dist < 75)   enter safe mode
if (dist <= 25)  rotate
if (in safe mode):
    if (dist >= 55)  move forward
    if (dist >= 75)  exit safe mode
    if (dist < 55)   rotate
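As runnable C++ this logic might look like the sketch below (our illustration; the thresholds are the ones from the slide, and the behavior outside safe mode is our assumption). The gap between the 55 cm and 75 cm thresholds acts as hysteresis, so noisy distance estimates do not make the robot oscillate between moving and rotating.

enum class Action { MoveForward, Rotate };

// One control step; 'safeMode' persists between calls. Distances in cm.
Action controlStep(float dist, bool& safeMode)
{
    if (dist < 75) safeMode = true;         // obstacle near: enter safe mode
    if (dist <= 25) return Action::Rotate;  // far too close: turn away now
    if (safeMode) {
        if (dist >= 75) safeMode = false;   // obstacle cleared: exit safe mode
        if (dist >= 55) return Action::MoveForward;
        return Action::Rotate;              // 25 < dist < 55
    }
    return Action::MoveForward;             // normal driving (assumed)
}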

 Camera prices are falling rapidly, and cameras carry rich data just like any other sensor, even in monocular vision systems
 Our system is capable of estimating a robot’s distance from its closest obstacle while taking into account many complications such as low-resolution images, the robot’s drift and encoder issues