Quadtree representations

Image Based Navigation System
Fardin Abdi, Tanishq Dubey, Jeffrey Huang and Dongwei Shi
Department of Computer Science, College of Engineering, University of Illinois at Urbana-Champaign

Method and Storage
First, we have to translate the human-readable image into a machine-readable one. The most efficient way to do this is edge detection.

Edge Detection and Template Matching

Edge Detection
In order to accomplish useful and accurate template matching, an image must first be edge detected. This process does exactly what its name suggests: it attempts to detect the edges in an image. This is done by first applying a slight Gaussian filter to the image to remove noise. The blurred image is then passed through the Canny algorithm, which finds the points of high relative intensity in the image and returns a greyscale map of that intensity.

Template Matching
The process of finding an image within an image is referred to as template matching. This is accomplished by "sliding" a template image over a source image and recording how well the template matches the source at each position. The location with the best match is the location of the template in the source. In order to make this method work at varying altitudes, our method implements scale-invariant template matching: the scale of the source image is varied and matching is repeated to find the location of greatest correlation.

Future Work
In order to make the edge detection and storage process more accurate and efficient, we still have to work on the following problems. The first is obtaining an accurate greyscale map in cloudy weather; the most reliable option for us is to lower the altitude of the plane by manipulating the autopilot system. The second is stabilizing the on-board camera so that it points vertically at the ground; we may be able to apply gyroscopes and other auxiliary sensors so that the camera takes photos only when it is pointing at the ground.
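The sliding-window search described above can be sketched in a few lines. The snippet below is a minimal illustration, not the poster's actual implementation: it scores each position with the sum of squared differences (a production system would more likely use a library routine such as OpenCV's cv2.matchTemplate with normalized correlation), and all names in it are invented for the example.

```python
import numpy as np

def match_template(source, template):
    """Slide `template` over `source`, scoring each position by the
    sum of squared differences (SSD); lower is a better match.
    Returns the (row, col) of the best-matching top-left corner."""
    sh, sw = source.shape
    th, tw = template.shape
    best_score, best_pos = float("inf"), (0, 0)
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            window = source[r:r + th, c:c + tw]
            score = np.sum((window - template) ** 2)
            if score < best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Embed a small "template" in a larger "source" and recover its location.
source = np.zeros((20, 20))
template = np.arange(9, dtype=float).reshape(3, 3)
source[5:8, 11:14] = template
print(match_template(source, template))  # -> (5, 11)
```

The scale invariance the poster describes would wrap this search in a loop that resizes the source image over a range of scales and keeps the best score found across all of them.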
Introduction
UAVs today are assisted by many sensors (GPS, compass, sonar); however, these sensors are prone to error or even failure. Our task is to design a system, under quite rigid restrictions on computational capacity, power usage, weight, and speed, that uses machine vision and image processing to allow a UAV to find its position based only on a photo it has taken and a database of preprocessed images. This feature can serve either as the sole method of positioning or as a fallback for the navigation system.

The left image is an original photograph; the right image shows edge detection applied. (from http://en.wikipedia.org/wiki/Edge_detection)
An MQ-9 Reaper, a hunter-killer surveillance UAV (from http://en.wikipedia.org/wiki/Unmanned_aerial_vehicle)

After the edge detection we came across another problem. On an embedded system, we have limited memory and computational power. To circumvent this, we stored the edge-detected images in a quadtree-like structure in physical storage. By splitting the images into smaller 100x100 pieces, we were able to keep them in directories (NW/SE/SW/NE) and recreate parts of the image as needed by joining the 100x100 pieces together, instead of searching through the large image in memory.

Results (Images)

Results and Conclusions
Results: As can be seen from the images on the left, our system was able to take raw satellite imagery and live imagery from the drone and identify where the drone was on a macroscopic scale. These results can be repeated with high accuracy and computed in an acceptable amount of time, allowing a UAV to navigate to its location.
Conclusions: All in all, we have developed a robust, efficient, and accurate method of finding a UAV's location based only on offline satellite imagery and photos taken by the UAV itself.
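The directory-based quadtree storage described above can be sketched as follows. This is a hypothetical reconstruction, not the authors' code: it maps a pixel coordinate to an NW/NE/SW/SE directory path, so that each 100x100 tile can be stored at, and later fetched from, the path of the quadrant that contains it.

```python
def quadtree_path(row, col, size, depth):
    """Map a pixel coordinate in a size-by-size image to a
    directory-style quadtree path (NW/NE/SW/SE at each level),
    e.g. 'SE/NW' at depth 2. Assumes size is divisible by 2**depth."""
    parts = []
    r0 = c0 = 0          # top-left corner of the current quadrant
    half = size
    for _ in range(depth):
        half //= 2
        south = row >= r0 + half
        east = col >= c0 + half
        parts.append(("S" if south else "N") + ("E" if east else "W"))
        if south:
            r0 += half
        if east:
            c0 += half
    return "/".join(parts)

# For a 400x400 map split to depth 2, each leaf quadrant is a 100x100 tile.
print(quadtree_path(0, 0, 400, 2))      # -> 'NW/NW'
print(quadtree_path(399, 399, 400, 2))  # -> 'SE/SE'
print(quadtree_path(150, 250, 400, 2))  # -> 'NE/SW'
```

Under this scheme, the tile holding pixels (100-199, 200-299) of a 400x400 map lives under NE/SW, and only the quadrants covering the region of interest ever need to be read back into memory.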
This Image Based Navigation System therefore offers itself as a powerful component of either a backup or a primary navigation system. Further testing and analysis need to be completed in order to guarantee the accuracy of the system, and the codebase must be optimized so that the algorithms run faster, keeping the latency between when an image is taken and when a location is returned to a minimum.

A DJI Phantom UAV for commercial and recreational aerial photography (from http://en.wikipedia.org/wiki/Unmanned_aerial_vehicle)

Aim
There are two aims. The first is to come up with an algorithm that compares satellite images (from Google Earth) with the current view from the UAV's camera. The second is to simplify the algorithm so that it can compile and run on a resource-limited platform such as the ODROID board on the UAV.

Quadtree representations
http://web.stanford.edu/class/ee368b/Projects/embrya/

The ODROID board used on the UAV (from http://www.hardkernel.com/main/main.php)
The implementation of how the tree is split (from http://upcommons.upc.edu/pfc/bitstream/2099.1/15859/3/exchange_internship_uj.pdf.txt)
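As a rough illustration of the edge-detection stage the poster describes (a Gaussian blur followed by a map of intensity gradients), the sketch below is a simplified stand-in for the full Canny pipeline: it omits Canny's non-maximum suppression and hysteresis thresholding, and every name in it is invented for the example.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur: suppresses noise before edge detection."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Convolve each row, then each column, with the 1-D kernel.
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

def edge_map(img, sigma=1.0):
    """Gradient-magnitude edge map: blur, then central differences."""
    smooth = gaussian_blur(img.astype(float), sigma)
    gy, gx = np.gradient(smooth)
    return np.hypot(gx, gy)

# A vertical step edge yields a ridge of high gradient magnitude at the step.
img = np.zeros((16, 16))
img[:, 8:] = 255.0
edges = edge_map(img)
# Along a middle row, the response peaks at the step (around columns 7-8),
# not in the flat regions on either side.
```

The greyscale intensity map this produces is what the template-matching stage consumes; the real Canny algorithm would additionally thin the ridges to one-pixel-wide edges.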