
Vision Geza Kovacs Maslab 2011

Colorspaces

RGB: red, green, and blue components. HSV: hue, saturation, and value. Your color-detection code will be more resilient to lighting conditions if you use HSV.

Example swatches from the slide:
RGB: 212, 45, 45 -> HSV: 0, 201, 212
RGB: 102, 22, 22 -> HSV: 0, 201, 102
RGB: 102, 0, 0 -> HSV: 0, 255, 102
RGB: 255, 105, 105 -> HSV: 0, 105, 255

Colorspaces

Note that because hue in HSV wraps around, red is both h=255 and h=0. See the Vision tutorial for more info on HSV.
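Java's standard library can do the RGB-to-HSV conversion for you. A minimal sketch using java.awt.Color.RGBtoHSB (Java calls the colorspace HSB; components come back in the 0..1 range, scaled here to 0..255 to match the values on the slides; the class and method names are my own):

```java
import java.awt.Color;

public class HsvDemo {
    // Convert an RGB pixel to HSV, scaled to the 0..255 range used on the slides.
    static int[] rgbToHsv255(int r, int g, int b) {
        float[] hsb = Color.RGBtoHSB(r, g, b, null); // h, s, v each in 0..1
        return new int[] {
            Math.round(hsb[0] * 255),
            Math.round(hsb[1] * 255),
            Math.round(hsb[2] * 255),
        };
    }

    public static void main(String[] args) {
        int[] hsv = rgbToHsv255(212, 45, 45);
        System.out.println(hsv[0] + " " + hsv[1] + " " + hsv[2]); // prints "0 201 212"
    }
}
```

This reproduces the first swatch above: RGB (212, 45, 45) maps to HSV (0, 201, 212).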

Representation of Color in Bytes

BufferedImage img = …; img.getRGB(x, y) returns a 32-bit (4-byte) integer, e.g. 0x00FF0000.

In 0x00FF0000, the top byte is the alpha channel (basically transparency, not of interest here); the remaining bytes are Red: 0xFF=255, Green: 0x00=0, Blue: 0x00=0.

More examples:
0x00FF0000 -> R: 0xFF=255, G: 0x00=0, B: 0x00=0
0x0000FF00 -> R: 0x00=0, G: 0xFF=255, B: 0x00=0
0x000000FF -> R: 0x00=0, G: 0x00=0, B: 0xFF=255
0x00C20E9F -> R: 0xC2=194, G: 0x0E=14, B: 0x9F=159

Extracting out R, G, B components

```java
BufferedImage img = …;
int rgb = img.getRGB(x, y);
int r = (rgb & 0x00FF0000) >> 16;
int g = (rgb & 0x0000FF00) >> 8;
int b = (rgb & 0x000000FF);
rgb = b + (g << 8) + (r << 16);
```

This also works if the image is in HSV format; just replace r with h, g with s, and b with v. See the Vision tutorial for more info.

By using color thresholds (checking that hue is in a certain range), we can classify pixels as Red, Yellow, or other.
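A minimal sketch of such a threshold classifier. The hue ranges below are illustrative assumptions, not values from the slides; you would tune them for your camera and lighting:

```java
public class HueClassifier {
    enum PixelClass { RED, YELLOW, OTHER }

    // The thresholds below are assumed values for illustration -- tune empirically.
    static PixelClass classify(int hue) {
        // Red wraps around the ends of the 0..255 hue range (h=0 and h=255 are both red).
        if (hue <= 10 || hue >= 245) return PixelClass.RED;
        if (hue >= 25 && hue <= 45) return PixelClass.YELLOW;
        return PixelClass.OTHER;
    }

    public static void main(String[] args) {
        System.out.println(classify(0));    // RED
        System.out.println(classify(35));   // YELLOW
        System.out.println(classify(120));  // OTHER
    }
}
```

Note that the wraparound check is what makes the red range work at both ends of the hue scale.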

Connected Component Labeling

As a first step in detecting balls and goals, we want to group connected yellow pixels and connected red pixels into "blobs of interest"; that is, label each pixel with a number indicating the connected component it belongs to.

Connected Component Labeling

Various efficient algorithms exist for finding connected components of white pixels in binary images. We can use these algorithms if we consider colors one at a time.

2-pass algorithm for Connected Component Labeling on Binary Images

Pass 1 (for each white pixel):
- If all 4 neighbors are black or unlabeled, assign a new label to the current point
- If exactly one neighbor is white and labeled, assign its label to the current point
- If more than one neighbor is white and labeled, assign one of their labels to the current point, and note the equivalence of their labels

Pass 2: Merge labels which were marked as equivalent in the first pass

Pass 1 walkthrough on the slide's example image:
- No labeled white neighbors -> Create label 1
- No labeled white neighbors -> Create label 2
- No labeled white neighbors -> Create label 4
- Current point is not white -> Do nothing
- 1 white neighbor with label 1 -> Assign 1
- 2 white labeled neighbors: 1, 2 -> Mark 1=2, assign 1
- 1 white neighbor with label 1 -> Assign 1
- 2 white labeled neighbors: 1, 4 -> Mark 1=4, assign 1

At the end of the first pass, we have marked labels 1, 2, and 4 as equivalent, and have marked labels 3 and 5 as equivalent In the second pass, we replace all 2s and 4s with 1s, and replace all 5s with 3s
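The two passes above can be sketched as follows, using a union-find structure to record label equivalences. This is a minimal sketch on a boolean array, with class and method names of my own; note that in a row-major scan only the north and west neighbors can already be labeled, so those are the ones consulted:

```java
public class TwoPassCCL {
    // Two-pass connected component labeling, 4-connectivity.
    // labels[y][x] holds the component id of each white pixel, 0 for black.
    static int[][] label(boolean[][] img) {
        int h = img.length, w = img[0].length;
        int[][] labels = new int[h][w];
        int[] parent = new int[h * w + 1]; // union-find over provisional labels
        int next = 1;
        // Pass 1: only the already-visited north and west neighbors are labeled.
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (!img[y][x]) continue;
                int up = (y > 0) ? labels[y - 1][x] : 0;
                int left = (x > 0) ? labels[y][x - 1] : 0;
                if (up == 0 && left == 0) {        // no labeled white neighbor: new label
                    labels[y][x] = next;
                    parent[next] = next;
                    next++;
                } else if (up != 0 && left != 0) { // two labels: take one, note equivalence
                    labels[y][x] = Math.min(up, left);
                    union(parent, up, left);
                } else {                            // exactly one labeled neighbor
                    labels[y][x] = Math.max(up, left);
                }
            }
        }
        // Pass 2: replace each label with its equivalence-class representative.
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                if (labels[y][x] != 0) labels[y][x] = find(parent, labels[y][x]);
        return labels;
    }

    static int find(int[] p, int i) { while (p[i] != i) i = p[i] = p[p[i]]; return i; }
    static void union(int[] p, int a, int b) { p[find(p, a)] = find(p, b); }
}
```

After pass 2, two pixels belong to the same blob exactly when their labels are equal.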

Colors aren't always enough for segmenting objects. Note that at the edges of objects there is a change in pixel value, so we can use edge detection for segmenting objects.

Image Convolution

Determines a pixel's value based on neighboring values (the relation is described by a kernel matrix). Used in blurring, sharpening, edge detection, etc.

Kernel:
0    1/6  0
1/6  1/3  1/6
0    1/6  0

Worked sums from the slide's example (source image: single-channel greyscale, not reproduced here):
1/3 + 4/6 + 7/6 + 3/6 + 5/6 = 21/6
3/3 + 13/6 + 1/6 + 9/6 + 10/6 = 39/6
5/3 + 1/6 + 0/6 + 10/6 + 15/6 = 36/6
10/3 + 3/6 + 5/6 + 6/6 + 12/6 = 46/6

There are various workarounds for determining the edge pixels.

Convolving with the same 3x3 kernel (0, 1/6, 0; 1/6, 1/3, 1/6; 0, 1/6, 0) gives a slightly blurred image:

```java
import java.awt.image.*;

BufferedImage simpleBlur(BufferedImage src) {
    float[] matrix = new float[] {
        0.0f,   1.0f/6, 0.0f,
        1.0f/6, 1.0f/3, 1.0f/6,
        0.0f,   1.0f/6, 0.0f,
    };
    Kernel kernel = new Kernel(3, 3, matrix);
    return new ConvolveOp(kernel).filter(src, null);
}
```

Applying the same convolution 20 times yields a more blurred image; the code is identical, just call simpleBlur repeatedly on its own output.

Gaussian Blur

A common preprocessing step before various operations (edge detection, color classification before doing connected component labeling, etc).

Kernel (a 5x5 Gaussian, each entry divided by 159):
2   4   5   4  2
4   9  12   9  4
5  12  15  12  5
4   9  12   9  4
2   4   5   4  2

Convolving the source image with this Gaussian kernel produces a smoothed result.

Detecting Horizontal Edges

Use the Sobel operator G_x. The slide shows the kernel as an image; the standard G_x kernel is:
-1  0  1
-2  0  2
-1  0  1

Convolving the source image with G_x produces a map of horizontal intensity changes.

Detecting Vertical Edges

Use the Sobel operator G_y. The slide shows the kernel as an image; the standard G_y kernel is:
-1  -2  -1
 0   0   0
 1   2   1

Convolving the source image with G_y produces a map of vertical intensity changes.

Edge Detection

Get matrices representing horizontal edges (G_x) and vertical edges (G_y) via convolution with the Sobel operators. Then assign each pixel sqrt((value in the horizontal-edge matrix)^2 + (value in the vertical-edge matrix)^2). More elaborate preprocessing and postprocessing can be used to get nicer results.
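A sketch of this gradient-magnitude computation on a plain greyscale int array (array-based rather than BufferedImage-based to keep it short; border pixels are simply left at zero, one of the workarounds mentioned earlier):

```java
public class SobelEdges {
    // Standard Sobel kernels.
    static final int[][] GX = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static final int[][] GY = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};

    // Per-pixel gradient magnitude sqrt(gx^2 + gy^2) of a greyscale image.
    static double[][] magnitude(int[][] img) {
        int h = img.length, w = img[0].length;
        double[][] out = new double[h][w]; // borders stay 0 for simplicity
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                int gx = 0, gy = 0;
                for (int dy = -1; dy <= 1; dy++) {
                    for (int dx = -1; dx <= 1; dx++) {
                        gx += GX[dy + 1][dx + 1] * img[y + dy][x + dx];
                        gy += GY[dy + 1][dx + 1] * img[y + dy][x + dx];
                    }
                }
                out[y][x] = Math.sqrt((double) gx * gx + (double) gy * gy);
            }
        }
        return out;
    }
}
```

On a sharp vertical step edge, G_x responds strongly and G_y is zero, so the magnitude peaks exactly along the edge.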

Segmenting Objects

Group same-colored regions using connected component labeling, and use edges to segment objects further? Fit lines or curves to edges?

Classifying Objects as Goals or Balls

Consider the shape: a rounded boundary vs. a straight boundary. Consider the region around the center: is it red/yellow or not? Consider special cases: goals with balls in the middle, goals observed at an angle, etc.

Estimating Distances to Objects

Note that all balls are the same size; likewise with goals, wall heights, etc. By making some measurements and using some trig, you can estimate distance to objects from your image data: Distance ~= k / (width in pixels), where k is a constant you calibrate by measurement.
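A minimal sketch of that calibration, assuming the inverse-width model on the slide (the class and method names are hypothetical):

```java
public class DistanceEstimator {
    // Model from the slide: distance ~= k / (width in pixels).
    // Calibrate k once by measuring a target of known size at a known distance.
    static double calibrate(double knownDistanceMeters, double measuredWidthPx) {
        return knownDistanceMeters * measuredWidthPx;
    }

    static double estimate(double k, double widthPx) {
        return k / widthPx;
    }

    public static void main(String[] args) {
        // If a ball 100 px wide was measured at 1.0 m, a 50 px ball is ~2.0 m away.
        double k = calibrate(1.0, 100.0);
        System.out.println(estimate(k, 50.0)); // prints 2.0
    }
}
```

Since the model ignores lens distortion and off-axis viewing, treat the result as an estimate and re-calibrate k per object type (ball, goal, wall).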

Testing Advice

Keep a collection of images on which you can run unit tests. Test detection of balls and goals from different angles and arranged in various ways. Make sure to test your vision code (especially color detection) in different lighting conditions.

Other Resources

Connected Component Labeling:
Edge Detection:
Various lectures from previous years also have info on camera details, performance optimizations, stereo vision, rigid body motion, etc.