Vision processing for robot navigation


Vision processing for robot navigation
Autonomous robot vacuum cleaner
11/13/2018
Nicholas Frank, nicholasfrnk@yahoo.com

Introduction
Explain the image processing used for robot navigation of Sir-Sux-Alot:
- Image processing overview
- Attempted solution
- Final solution

Problem and solution
The original problem is to pick up the most rice in the least amount of time. The arena is a simulated room with obstacles. The best solution is to navigate a systematic route around the simulated room.

Architecture
A camera above the vacuum arena identifies the location and bearing of the robot. The location and bearing are transmitted via UDP to the Java navigation engine, which tells the robot where to go via serial RF.
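The UDP leg described above could be sketched as follows. The wire format, hostname, and port are illustrative assumptions, not from the slides.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;
import java.util.Locale;

public class PoseSender {
    // Encode the pose as a comma-separated string (assumed wire format).
    static String encode(double x, double y, double bearing) {
        return String.format(Locale.US, "%.2f,%.2f,%.2f", x, y, bearing);
    }

    // Send one pose datagram to the navigation engine.
    static void send(double x, double y, double bearing) throws Exception {
        byte[] payload = encode(x, y, bearing).getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            socket.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getByName("localhost"), 9000)); // assumed endpoint
        }
    }
}
```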

Vision Requirements
The goal of this vision system is to find the location and bearing of the robot.
Robustness: the vision system must work in unknown lighting conditions.

Minimum Data Required
Only two points are necessary:
- Identify each point.
- Connect the points to get the bearing.
- Find the midpoint to get the center.
Knowing the distance between the points and their relationship to the corners, you can calculate the outer dimensions of the robot.
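The two-point geometry above can be written down directly; the class and method names here are illustrative, not the author's code.

```java
public class RobotPose {
    // Bearing in radians from marker 1 to marker 2, measured from the +x axis.
    public static double bearing(double x1, double y1, double x2, double y2) {
        return Math.atan2(y2 - y1, x2 - x1);
    }

    // Midpoint of the two markers gives the robot's center.
    public static double[] midpoint(double x1, double y1, double x2, double y2) {
        return new double[] { (x1 + x2) / 2.0, (y1 + y2) / 2.0 };
    }
}
```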

Image
Each pixel is a 32-bit number with four 8-bit components (alpha, red, green, and blue), each having a value between 0 and 255:
Alpha = a(x,y); Red = r(x,y); Green = g(x,y); Blue = b(x,y)
F = f(x,y) = (a(x,y), r(x,y), g(x,y), b(x,y))
The image is represented as a 2-D array [F1, F2, ..., Fn].
Color representation:
  Color   Red   Green   Blue
  White   255   255     255
  Black   0     0       0
  Blue    0     0       255
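Packing and unpacking the four 8-bit channels of a 32-bit ARGB pixel comes down to shifts and masks; this is a minimal sketch of that layout (the one `java.awt.image.BufferedImage.TYPE_INT_ARGB` uses).

```java
public class Pixel {
    // Extract each 8-bit channel from the packed 32-bit value.
    public static int alpha(int argb) { return (argb >>> 24) & 0xFF; }
    public static int red(int argb)   { return (argb >>> 16) & 0xFF; }
    public static int green(int argb) { return (argb >>> 8)  & 0xFF; }
    public static int blue(int argb)  { return argb & 0xFF; }

    // Combine four 0-255 channel values back into one pixel.
    public static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }
}
```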

Failed Solution
Two ultra-bright LEDs: green and red.
Background subtraction.
Hypothesis: using background subtraction, everything but the robot should be black. The LEDs will have a black background, so only the LEDs should be seen. The centroid of each LED will be found; then the predominant color of each centroid will be found: red or green.

Background subtraction
Capture the first image I1, then subtract it from I2, displaying only the pixels where there is a difference. Then use I2 and subtract it from I3. The general form is In − I(n+1).
Problem 1 (variable pixel colors): pixels of an image change color even when nothing in the scene changes.
Solution 1a: take the max or min of the background image pixels, then subtract from the average image.
Solution 1a produces its own problems: any change in the camera position or lighting will invalidate the background, causing most of the image to be displayed. (Demo: background subtraction)
Problem 2 (halo effect): a change in the image distorts the surrounding pixels enough that the image gets a halo around the change.
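Per-pixel frame differencing with a noise tolerance (to absorb Problem 1) can be sketched like this. The grayscale-array representation and threshold value are illustrative assumptions.

```java
public class FrameDiff {
    // Returns a mask: true where the current frame differs from the
    // background by more than the noise threshold.
    public static boolean[] diff(int[] background, int[] current, int threshold) {
        boolean[] mask = new boolean[current.length];
        for (int i = 0; i < current.length; i++) {
            mask[i] = Math.abs(current[i] - background[i]) > threshold;
        }
        return mask;
    }
}
```

A larger threshold suppresses the pixel noise of Problem 1, at the cost of missing low-contrast changes.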

Chosen Solution
Find the object with 3 dots:
- Original web cam image.
- Blend the image into the background using web cam settings.
- Using a color histogram, threshold the image to black and white.
- Use a recursive flood-fill algorithm to find blobs and record the north, south, west, and east values of each blob.
- Count the holes in the blobs; the robot will be the only blob with 3 holes.
- Once the holes are identified, their relationship to each other gives the bearing.

Color Histogram
As mentioned earlier, each pixel's color is represented by an 8-bit value for red, green, and blue, each between 0 and 255. The color histogram graphs represent:
- Y axis: the number of times a color value occurs.
- X axis: the color value.
The graphs are:
- All color channels combined
- Red, green, and blue separately
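Building a 256-bin histogram of one 8-bit channel is a single pass over the pixels; a minimal sketch:

```java
public class Histogram {
    // Count how many times each 0-255 value occurs in the channel.
    public static int[] of(int[] channelValues) {
        int[] bins = new int[256];
        for (int v : channelValues) bins[v]++;
        return bins;
    }
}
```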

Threshold with Color Histogram
The color histogram is used as a threshold to map the image to black and white. You may have already noticed two distinct humps in the graph:
- The left, bigger hump is the background.
- The right hump is everything richer in color than the background.
The threshold bars set between the two humps produce the image to the right. A dynamic threshold looks for the end of the first hump; this made the vision system resistant to changing light conditions such as flash photography.
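One way to find "the end of the first hump" dynamically is to start at the tallest bin (the background hump) and walk right until the counts stop falling. This is a simple illustrative heuristic, not necessarily the author's exact rule.

```java
public class DynamicThreshold {
    // Returns the bin just past the background hump's descending slope.
    public static int find(int[] hist) {
        // The background is the tallest hump, so start from the global max.
        int peak = 0;
        for (int i = 1; i < hist.length; i++)
            if (hist[i] > hist[peak]) peak = i;
        // Walk right while counts keep falling; the first rise or flat spot
        // marks the valley between the two humps.
        int i = peak;
        while (i + 1 < hist.length && hist[i + 1] < hist[i]) i++;
        return i;
    }
}
```

Because the threshold is recomputed from each frame's histogram, a sudden brightness change (such as a camera flash) shifts both humps and the valley moves with them.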

Flood Fill
What is flood fill? A technique that fills a closed area by replacing its interior color with a fill color.

Flood Fill Algorithm
Fast recursive flood-fill algorithm:

    public void fillFast(int x, int y, int fill) {
        if ((x < 0) || (x >= raster.width)) return;
        if ((y < 0) || (y >= raster.height)) return;
        int old = raster.getPixel(x, y);
        if (old == fill) return;
        raster.setPixel(fill, x, y);
        fillEast(x+1, y, fill, old);
        fillSouth(x, y+1, fill, old);
        fillWest(x-1, y, fill, old);
        fillNorth(x, y-1, fill, old);
    }

    private void fillEast(int x, int y, int fill, int old) {
        // Moving east, so only one bound needs checking.
        if (x >= raster.width) return;
        if (raster.getPixel(x, y) == old) {
            raster.setPixel(fill, x, y);
            // Only three directions of recursion: never back west.
            fillEast(x+1, y, fill, old);
            fillSouth(x, y+1, fill, old);
            fillNorth(x, y-1, fill, old);
        }
    }

How flood fill was used
Once a blob was found, it was flood filled. The northmost, southmost, westmost, and eastmost points were recorded during the flood fill; these points are shown as white dots. The extreme points were used to calculate the center, also shown as a white dot. The blob was then scanned for holes, which were flood filled to calculate their center points. If a blob had three holes it was identified as the robot; all other blobs were ignored.
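Computing the center from the recorded extreme points is just the midpoint of the bounding box; a small sketch (names are mine, not the author's):

```java
public class BlobCenter {
    // Center of the bounding box recorded during the flood fill.
    // north/south are row (y) extremes, west/east are column (x) extremes.
    public static int[] center(int north, int south, int west, int east) {
        return new int[] { (west + east) / 2, (north + south) / 2 };
    }
}
```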

Acknowledgements
Flood-Fill Algorithm, by Junichi Edamitsu
http://research.microsoft.com/vision/
http://research.microsoft.com/projects/VisSDK/
P.F. Whelan and D. Molloy (2000), Machine Vision Algorithms in Java: Techniques and Implementation, Springer (London), 298 pages. ISBN 1-85233-218-2.