EEC-693/793 Applied Computer Vision with Depth Cameras


EEC-693/793 Applied Computer Vision with Depth Cameras
Lecture 20
Wenbing Zhao
wenbing@ieee.org

Outline
Image processing with Emgu CV
Image filtering
Edge detection

Image Processing vs. Computer Vision
Image processing mainly deals with producing different representations of an image by transforming it in various ways.
Computer vision is concerned with extracting information from images so that decisions can be made.

Image Filters
Image filter: a function that takes the original pixel and gives an output that is proportional in some way to the information contained in the original pixel.
Images are 2-dimensional, so image filters are 2-dimensional matrices, referred to as kernels.
A filter matrix is a discretized version of a filter function.

// Filter kernel for detecting vertical edges
float vertical_fk[5][5] = {{0,0,0,0,0},
                           {0,0,0,0,0},
                           {-1,-2,6,-2,-1},
                           {0,0,0,0,0},
                           {0,0,0,0,0}};

// Filter kernel for detecting horizontal edges
float horizontal_fk[5][5] = {{0,0,-1,0,0},
                             {0,0,-2,0,0},
                             {0,0,6,0,0},
                             {0,0,-2,0,0},
                             {0,0,-1,0,0}};

Applying Filters (Convolution)
Element-wise multiplication between the elements of the kernel and the pixels of the image under it.
A function is used to calculate a single number from the results of all these element-wise multiplications; this function can be a sum, average, minimum, maximum, or something more complicated.
The value thus calculated is known as the "response" of the image to the filter at that position.
The pixel falling under the central element of the kernel is assigned the value of the response.
The kernel is then shifted to the right and, if necessary, down, and the process repeats.
A nice article at: http://www.roborealm.com/help/Convolution.php

Convolving Filter: Example
From: http://www.roborealm.com/help/Convolution.php
A grayscale image (10x10):
 34  22  77  48 237 205  29 212 107  41
 50 150 158 233 251 112 165  47 229  93
219  43  56  42 113 140  94  32  19  44
 30  36 151 101  28  84  10  90  73  63
148 159 183  99 192  70  27  88  20 230
 53  38 106 239 202 196 123  37 174  14
127 100 189 186 214 187 227  86 195   6
168  46 166 249 215 110 125 191   8

Convolving Filter: Example
A 3x3 filter for line detection:
-1 -1 -1
-1  8 -1
-1 -1 -1
Work with each pixel and its 3x3 neighborhood. For the row=2, col=2 pixel, the neighborhood block is:
34  22  77
50 150  77
93   0  77
Multiply the filter values element-wise with the image block:
-1*34   -1*22   -1*77
-1*50    8*150  -1*77
-1*93   -1*0    -1*77

Convolving Filter: Example
Sum of all values:
(-34)+(-22)+(-77) + (-50)+(1200)+(-77) + (-93)+(0)+(-77) = 770
Divide by the divisor and add the bias; a divisor and bias might be needed to keep the pixel value within 0 to 255.
Typically divisor=1 and bias=0, so the result remains 770.
If the new pixel value is > 255, set it to 255; if it is < 0, set it to 0.
Hence, the new pixel value for row=2, col=2 is 255:
34  22  77
50 255  77
93   0  77

Convolving Filter: Example
Continue with all other 3x3 blocks, always using the original pixel values (not the newly computed responses).
Next block: 22 77 48 150 158 219
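The single-response computation above can be written directly in code. Below is a minimal C# sketch (not part of the lecture's app; the kernel, block, divisor, and bias values simply mirror the worked example) that applies a 3x3 kernel to one neighborhood, divides by the divisor, adds the bias, and clamps the result to the 0-255 range.

// Minimal sketch: compute the response of one 3x3 neighborhood to a kernel.
int[,] kernel =
{
    { -1, -1, -1 },
    { -1,  8, -1 },
    { -1, -1, -1 }
};
int[,] block =
{
    { 34,  22, 77 },
    { 50, 150, 77 },
    { 93,   0, 77 }
};
double divisor = 1.0, bias = 0.0;

double sum = 0;
for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
        sum += kernel[i, j] * block[i, j];   // element-wise multiply and accumulate

// Divide by the divisor, add the bias, then clamp to the valid pixel range [0, 255].
int response = (int)(sum / divisor + bias);
byte newPixel = (byte)Math.Min(Math.Max(response, 0), 255);
// For this block: sum = 770, so newPixel = 255.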

An Implementation of 2D Filtering

private Image<Bgr, Byte> filterImage(Image<Bgr, Byte> image, double[,] filter, double factor, double bias)
{
    Image<Bgr, Byte> filtered = new Image<Bgr, Byte>(image.Width, image.Height);
    int w = image.Width;
    int h = image.Height;
    int filterWidth = filter.GetLength(0);
    int filterHeight = filter.GetLength(1);
    byte[, ,] data = image.Data;            // pixel data indexed as [row, column, channel]
    byte[, ,] filteredData = filtered.Data;
    for (int y = 0; y < w; ++y)             // y walks the columns
    {
        for (int x = 0; x < h; ++x)         // x walks the rows
        {
            double red = 0.0, green = 0.0, blue = 0.0;
            // Accumulate the element-wise products of the kernel and the neighborhood
            for (int filterY = 0; filterY < filterWidth; filterY++)
            {
                for (int filterX = 0; filterX < filterHeight; filterX++)
                {
                    // Wrap around the image borders (periodic boundary condition)
                    int imageY = (y - filterWidth / 2 + filterX + w) % w;
                    int imageX = (x - filterHeight / 2 + filterY + h) % h;
                    red += data[imageX, imageY, 2] * filter[filterX, filterY];
                    green += data[imageX, imageY, 1] * filter[filterX, filterY];
                    blue += data[imageX, imageY, 0] * filter[filterX, filterY];
                }
            }
            // Apply factor and bias, then clamp each channel to [0, 255]
            byte r = (byte)Math.Min(Math.Max((int)(factor * red + bias), 0), 255);
            byte g = (byte)Math.Min(Math.Max((int)(factor * green + bias), 0), 255);
            byte b = (byte)Math.Min(Math.Max((int)(factor * blue + bias), 0), 255);
            filteredData[x, y, 0] = b;
            filteredData[x, y, 1] = g;
            filteredData[x, y, 2] = r;
        }
    }
    return filtered;
}
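As a usage sketch (assuming an Image<Bgr, Byte> named img has already been loaded, and reusing the ToBitmapSource helper and image2 control that appear later in these slides), the vertical-edge kernel from earlier can be applied like this:

// Apply the vertical-edge kernel with factor = 1.0 and bias = 0 (no rescaling).
double[,] verticalFilter = new double[,] {
    { 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0 },
    { -1, -2, 6, -2, -1 },
    { 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0 } };
Image<Bgr, Byte> edges = filterImage(img, verticalFilter, 1.0, 0.0);
image2.Source = ToBitmapSource(edges);   // display the result in the second image control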

Filtering Results

Blurring Images
Blurring an image is the first step toward reducing the size of an image without changing its appearance too much.
An image can be thought of as having various "frequency components" along both of its axes.
Edges have high frequencies, whereas slowly changing intensity values have low frequencies.
A vertical edge creates high-frequency components along the horizontal axis of the image, and vice versa.
Finely textured regions have high frequencies (their pixel intensity values change considerably over short pixel distances).

Blurring Images
To reduce the size of an image, remove the high-frequency components from it, i.e., smooth out those high-magnitude, short-interval changes.
Image intensity: the sum of the values at a pixel.
An RGB color image has three channels: R, G, B; each color channel's intensity is the value of that color.
Low intensity: dark; high intensity: brighter.

Blurring Images
Blurring can be accomplished by replacing each pixel in the image with some sort of average of the pixels in a region around it.
To do this efficiently, the region is kept rectangular and symmetric around the pixel, and the image is convolved with a "normalized" kernel.
Normalized because we want the average, not the sum.

Blurring Images: Kernels
Box kernel (sum: 25):
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
1 1 1 1 1
Gaussian kernel (sum: 256):
1  4  6  4 1
4 16 24 16 4
6 24 36 24 6
4 16 24 16 4
1  4  6  4 1

Blurring Images: Results
Reuse the same implementation as for image filtering, except that we must normalize the calculation: divide by the sum of the kernel used.
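As an illustration, here is a sketch of blurring with the filterImage routine above, assuming the optional normalize flag added later in these slides (which divides every kernel element by the kernel's sum before convolving, so the response is an average rather than a sum):

// Sketch: blur using the lecture's filterImage with the 5x5 Gaussian kernel.
// The final 'true' argument is the normalize flag (kernel divided by its sum, 256 here).
double[,] gaussian = new double[,] {
    { 1,  4,  6,  4, 1 },
    { 4, 16, 24, 16, 4 },
    { 6, 24, 36, 24, 6 },
    { 4, 16, 24, 16, 4 },
    { 1,  4,  6,  4, 1 } };
Image<Bgr, Byte> blurred = filterImage(img, gaussian, 1.0, 0.0, true);
image2.Source = ToBitmapSource(blurred);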

Edge Detection
Edges: points in the image where the gradient of the image is quite high.
Gradient: the change in pixel intensity values.
The gradient of the image is calculated by computing the gradients in the X and Y directions and then combining them using Pythagoras' theorem.
The angle of the gradient can be calculated by taking the arctangent of the ratio of the gradients in the Y and X directions, respectively.
Kernels for calculating gradients along the x and y directions (Scharr kernels):
X direction:
 -3  0  3
-10  0 10
 -3  0  3
Y direction:
-3 -10 -3
 0   0  0
 3  10  3
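A minimal sketch of the magnitude/angle combination described above, assuming the per-pixel gradients gx and gy have already been computed by convolving with the two kernels (the values shown are just illustrative):

// Sketch: combine per-pixel X and Y gradient responses into magnitude and angle.
double gx = 120.0, gy = -45.0;                            // example gradient values
double magnitude = Math.Sqrt(gx * gx + gy * gy);          // Pythagoras' theorem
double angleDeg = Math.Atan2(gy, gx) * 180.0 / Math.PI;   // arctangent of gy/gx, in degrees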

Canny Edges
The Canny algorithm uses some postprocessing to clean the edge output and gives thin, sharp edges.
Canny algorithm:
Remove noise by convolving the image with a normalized Gaussian kernel.
Compute the x and y gradients by using the Sobel kernels:
x direction:
-1 0 1
-2 0 2
-1 0 1
y direction:
-1 -2 -1
 0  0  0
 1  2  1
Find the overall gradient magnitude and gradient angle; the angle is rounded off to 0, 45, 90, or 135 degrees.
Non-maximum suppression: a pixel is considered to be on an edge only if its gradient magnitude is larger than that of its neighboring pixels in the gradient direction.
Hysteresis thresholding: use two thresholds. A pixel is accepted as an edge if its gradient is higher than the upper threshold, and rejected if it is smaller than the lower threshold; an in-between pixel is accepted as an edge only if it is connected to a pixel that is an edge.
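As an illustration of the angle-rounding step, here is a small sketch (a hypothetical helper, not part of the lecture code) that quantizes a gradient angle to the nearest of the four Canny directions:

// Sketch: round a gradient angle (in degrees) to 0, 45, 90, or 135,
// as used by non-maximum suppression along the gradient direction.
static int QuantizeAngle(double angleDeg)
{
    double a = ((angleDeg % 180) + 180) % 180;   // fold the angle into [0, 180)
    if (a < 22.5 || a >= 157.5) return 0;
    if (a < 67.5) return 45;
    if (a < 112.5) return 90;
    return 135;
}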

Canny Edge Detector API
public Image<TColor, TDepth> Canny(TColor thresh, TColor threshLinking);
Defined for the Image<> class.
Parameters:
thresh: the threshold used to find initial segments of strong edges
threshLinking: the threshold used for edge linking
Returns: the edges found by the Canny edge detector

Corner Detection
A corner can be defined as the intersection of two edges.
A corner can also be defined as a point for which there are two dominant and different edge directions in a local neighborhood of the point.
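The lecture app does not implement corner detection, but as a hedged sketch of how it might be done with Emgu CV: the GFTTDetector class in Emgu.CV.Features2D (already imported below) wraps OpenCV's "good features to track" / Harris corner machinery. The class name, constructor arguments, and Detect signature here are assumptions about the installed Emgu CV version and should be checked against its documentation.

// Hypothetical sketch: detect corners in a grayscale version of the image.
// Constructor arguments (assumed): maxCorners, qualityLevel, minDistance, blockSize, useHarrisDetector.
Image<Gray, Byte> grayImg = img.Convert<Gray, Byte>();
GFTTDetector gftt = new GFTTDetector(100, 0.01, 10, 3, true);
MKeyPoint[] corners = gftt.Detect(grayImg);   // assumed to return keypoints at corner locations
foreach (MKeyPoint kp in corners)
    Console.WriteLine("Corner at (" + kp.Point.X + ", " + kp.Point.Y + ")");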

Building ImageFiltering App
Create a new C# WPF project named "ImageFiltering".
Add references, copy the OpenCV DLLs to the path, and configure the project properties for an x64 target.
Design the GUI: two image controls, 2 labels, a textbox, 6 buttons.
Add the code.

Building ImageFiltering App

Building ImageFiltering App
Import namespaces:

using System.Diagnostics;
using System.Drawing;
using System.Text;
using Emgu.CV;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.Util;
using System.Runtime.InteropServices; // for DllImport

Building ImageFiltering App
Core method for convolving (portion shown earlier): takes an Image<> and a filter (kernel) with parameters and returns the filtered image. For blurring, the filter must be normalized.

private Image<Bgr, Byte> filterImage(Image<Bgr, Byte> image, double[,] filter,
                                     double factor, double bias, bool normalize = false)

if (normalize)
{
    // calculate the sum of all elements in the kernel
    double sum = 0;
    for (int i = 0; i < filterWidth; i++)
    {
        for (int j = 0; j < filterHeight; j++)
        {
            sum += filter[i, j];
        }
    }
    // normalize the filter kernel: divide every element by the sum
    for (int i = 0; i < filterWidth; i++)
    {
        for (int j = 0; j < filterHeight; j++)
        {
            filter[i, j] = filter[i, j] / sum;
        }
    }
}

Building ImageFiltering App
Filters used: vertical, horizontal, blur, sharpen. Always use factor=1.0 and bias=0.

// Filter kernel for detecting vertical edges
double[,] verticalFilter = new double[,] {
    { 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0 },
    { -1, -2, 6, -2, -1 },
    { 0, 0, 0, 0, 0 },
    { 0, 0, 0, 0, 0 } };

// Filter kernel for detecting horizontal edges
double[,] filter = new double[,] {
    { 0, 0, -1, 0, 0 },
    { 0, 0, -2, 0, 0 },
    { 0, 0, 6, 0, 0 },
    { 0, 0, -2, 0, 0 },
    { 0, 0, -1, 0, 0 } };

// Filter kernel for blurring (Gaussian kernel, must be normalized)
double[,] filter = new double[,] {
    { 1, 4, 6, 4, 1 },
    { 4, 16, 24, 16, 4 },
    { 6, 24, 36, 24, 6 },
    { 4, 16, 24, 16, 4 },
    { 1, 4, 6, 4, 1 } };

// Sharpen filter
double[,] filter = new double[,] {
    { -1, -1, -1 },
    { -1, 9, -1 },
    { -1, -1, -1 } };
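A sketch of how one of the button handlers might wire a kernel into the filtering pipeline (the handler name is hypothetical; filterImage, ToBitmapSource, img, and image2 all appear elsewhere in these slides):

// Hypothetical button handler: apply the sharpen kernel and display the result.
private void sharpenButton_Click(object sender, RoutedEventArgs e)
{
    double[,] filter = new double[,] {
        { -1, -1, -1 },
        { -1,  9, -1 },
        { -1, -1, -1 } };
    Image<Bgr, Byte> sharpened = filterImage(this.img, filter, 1.0, 0.0);
    image2.Source = ToBitmapSource(sharpened);   // show the filtered image in the second control
}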

Building ImageFiltering App
Edge detection code:

// Convert the image to grayscale and filter out the noise
Image<Gray, Byte> gray = img.Convert<Gray, Byte>().PyrDown().PyrUp(); // PyrDown/PyrUp to remove noise
Gray cannyThreshold = new Gray(180);
Gray cannyThresholdLinking = new Gray(120);
Image<Gray, Byte> cannyEdges = gray.Canny(cannyThreshold, cannyThresholdLinking);
image2.Source = ToBitmapSource(cannyEdges);

Building ImageFiltering App
Browse and load a new image:

// Create OpenFileDialog
Microsoft.Win32.OpenFileDialog dlg = new Microsoft.Win32.OpenFileDialog();

// Display OpenFileDialog by calling the ShowDialog method
Nullable<bool> result = dlg.ShowDialog();

// Get the selected file name and load the image
if (result == true)
{
    Image<Bgr, Byte> newimg;
    try
    {
        newimg = new Image<Bgr, byte>(dlg.FileName);
        this.img = newimg;
        image1.Source = ToBitmapSource(this.img);
    }
    catch
    {
        MessageBox.Show("Invalid file format");
        return;
    }
}