Object Tracking Using Autonomous Quad Copter
Carlos A. Muñoz, Advisor: Dr. Tarek Sobh
Robotics, Intelligent Sensing & Control (RISC) Lab., School of Engineering, University of Bridgeport

Abstract
This work presents an approach to an autonomous ball-catching quad copter that combines object detection with trajectory calculation. A quad copter autonomously catches a ball by dynamically calculating the ball's trajectory and interception point using two Kinects. The approach is tested in two steps: the first test is done on a flat surface (a table) and the second test is done in the air.

Introduction
The goal of this work is to create an autonomous robotic system that attempts to simulate a human catcher. The system is implemented on the quad copter shown in Figure 1. The idea came from watching kids at play, throwing a ball against a wall or into the air and having to catch it themselves. What if a robot could pick the ball up, or hit it back, automatically based on the ball's location? This could be useful in areas such as golf, where "bots" could be sent to collect the golf balls and return them to a central location, or for a bot that acts like a dog and catches a ball in the air. The objective of this work is to implement such an autonomous system.

Hardware
A. Quad Copter
Three main hardware modules are used in conjunction to achieve the end result: one quad copter made by 3D Robotics and two Kinects made by Microsoft. The quad copter has four motors with four blades; two blades spin counterclockwise and two spin clockwise. Four ESCs (electronic speed controllers) govern the speed of the motors. The quad also comes with a power distribution board, a 4200 mAh battery, and an APM (autopilot module) that, combined with GPS, provides autonomy by setting a route on a map for the quad to follow and return along if need be. A telemetry radio module on the quad constantly sends information (altitude, speed, location, battery level, etc.) to the PC, and another telemetry radio connected to the PC via USB sends pitch, yaw, and roll corrections to the quad. Both are transmit/receive modules.

Figure 1. Quad copter used in this work.

B. Kinect
The Kinect cameras (model shown in Figure 2) are made by Microsoft and provide three main functions. A depth stream gives an approximation of how far an object is from the Kinect. A skeleton-tracking module detects humans in front of the sensor. Finally, a regular color (RGB) stream can be used like an ordinary camera. The color stream is the feature used here, as it gives the XYZ coordinates needed to accomplish this work.

Figure 2. Microsoft Kinect.

Methodology
The work is broken into four main modules. The first module simply grabs the latest image from each of the two Kinects. The second module checks whether the object we want to track is present in the grabbed images; if it is not found, the images are discarded and new ones are grabbed. The third module first checks whether the primary object (the ball) is within an initial distance from the quad; if it is, another image is taken a few milliseconds later, and the two samples are used to compute the expected trajectory. This trajectory is then sent to the quad, which in turn intercepts the object. These are the basics of the work, as shown in Figure 3.

Figure 3. Tracking algorithm.

Image Processing
The images gathered by both Kinects are taken as raw color streams at 30 frames per second. They are converted to OpenCV images and then to the HSV color space, because intensity is easier to use than color when separating the object from background noise. Once the HSV upper and lower limits have been obtained, a gray (binary) image is created from those two values. With only intensity left in the image, we can start looking for shapes (contours); rectangles are extracted using boundaries for height and width. If a rectangle is detected, the object we are trying to locate has been found, and it is shown with a filled rectangle on the screen. This procedure is repeated frame after frame to track the location of the object as it moves through the space provided.

A. Object Detection
One thing to note is that recognition of the object works best when done by intensity rather than color (RGB). Using the HSV values (hue, saturation, and value), objects can be added to the array of objects to be searched for. Each one is added using the visual box in the program: the image, initially black and white, shows the picture in real time, and as each value is adjusted with its slider, the object becomes more apparent (stays white) while the background fades (goes to black). These values are essential and need to be as accurate as possible, since they are used in combination with the threshold value to determine whether the tracked object is in the frame being grabbed from the Kinect. Most of the code to achieve this is written in C# using the Emgu CV .NET wrapper.

Figure 4. Green ball added to the program for continuous detection.
Figure 5. Green ball found; the red box indicates detection.
Figure 6. Six sliders, two for each HSV upper and lower limit value; three objects can be tracked at the same time if needed.
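The per-frame detection just described (HSV conversion, in-range thresholding, contour extraction, and width/height filtering of the bounding rectangle) can be sketched with Emgu CV roughly as follows. This is a minimal illustration rather than the project's actual code: the HSV limits, the size bounds, and the names BallDetector and TryDetect are assumptions, and in the real program the limits come from the sliders shown in Figure 6.

```csharp
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

// Minimal sketch of the per-frame detection described above. The HSV limits
// and size bounds are placeholder assumptions; in the real program they come
// from the sliders shown in Figure 6.
public class BallDetector
{
    public Hsv Lower = new Hsv(40, 100, 100);   // assumed lower HSV limit for a green ball
    public Hsv Upper = new Hsv(80, 255, 255);   // assumed upper HSV limit
    public int MinSide = 10;                    // assumed width/height boundaries
    public int MaxSide = 200;                   // for candidate rectangles

    // Returns true and the bounding rectangle if the object is found in the frame.
    public bool TryDetect(Image<Bgr, byte> frame, out Rectangle found)
    {
        found = Rectangle.Empty;

        // 1. Convert the color frame to the HSV color space.
        Image<Hsv, byte> hsv = frame.Convert<Hsv, byte>();

        // 2. Keep only the pixels between the lower and upper limits (gray image).
        Image<Gray, byte> mask = hsv.InRange(Lower, Upper);

        // 3. Look for shapes (contours) in the thresholded image.
        using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.FindContours(mask, contours, null,
                RetrType.External, ChainApproxMethod.ChainApproxSimple);

            // 4. Accept the first contour whose bounding rectangle fits the size bounds.
            for (int i = 0; i < contours.Size; i++)
            {
                Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
                if (r.Width >= MinSide && r.Width <= MaxSide &&
                    r.Height >= MinSide && r.Height <= MaxSide)
                {
                    // Draw the detection box on the frame, as in Figure 5.
                    frame.Draw(r, new Bgr(Color.Red), 2);
                    found = r;
                    return true;
                }
            }
        }
        return false;
    }
}
```

A caller would run TryDetect on the frame from each Kinect and move on to the trajectory step only when the object is found in both.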
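Once the ball's rectangle has been found in consecutive frames, the third module described under Methodology turns two such positions, taken a few milliseconds apart, into an expected trajectory and interception point. The exact calculation is not reproduced here; the sketch below only illustrates the simplest case, the first (table) test, where the ball rolls at roughly constant velocity and its position can be extrapolated linearly. The Sample and Predict names are assumptions introduced for the example.

```csharp
// Minimal constant-velocity sketch of the two-sample prediction used by the
// third module. Positions are the XYZ values assembled from the two Kinects
// (X and Y from the first Kinect, Z from the second Kinect's X). The names
// and the constant-velocity assumption are illustrative, not the project's
// exact calculation; for the in-air test a gravity term would be added on
// the vertical axis.
public struct Sample
{
    public double X, Y, Z;   // object position
    public double T;         // timestamp in seconds
}

public static class Intercept
{
    // Predict the object's position lookAhead seconds after the second sample.
    public static Sample Predict(Sample first, Sample second, double lookAhead)
    {
        double dt = second.T - first.T;          // a few milliseconds between samples
        double vx = (second.X - first.X) / dt;   // estimated velocity components
        double vy = (second.Y - first.Y) / dt;
        double vz = (second.Z - first.Z) / dt;

        return new Sample
        {
            X = second.X + vx * lookAhead,
            Y = second.Y + vy * lookAhead,
            Z = second.Z + vz * lookAhead,       // Z is ignored for the table test
            T = second.T + lookAhead
        };
    }
}
```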
B. Threshold Values
These values can be tweaked and are based on the HSV values of the object being tracked. At the beginning of the program, once the values have been stored, the user sees a rectangle around the object being tracked. This rectangle is shown only if the threshold value satisfies the range (the set-up distance value for the object).

C. Coordinates
The coordinates of the objects are represented as XYZ values. Two of the values (X and Y) are given by the first Kinect; the third value (Z) is given by the second Kinect (its own X value). For the first test only the X and Y values are used, since the ball is rolled on the table; for the second test all three values are used. These values are relative to the size of the ImageBox used in C#; the real values have to be calculated from the distance of the camera in relation to the ball and the quad copter, and they vary with the length of the surface (for the first test) and the distance of the quad (for the second test).

Conclusion
In this work, an autonomous quad copter was implemented to capture a ball. Making a quad copter behave like a human and intercept an object is just one small example of what autonomy can do and how it will impact us in the not-so-distant future. Great things are possible, and hopefully this example can benefit and inspire someone to take this work and expand it to the next level. For more information, including an in-depth explanation, demo videos, and pictures, please visit: