EE/CSE 576 HW 1 Notes.

Overview
Assignment 1 is a set of exercises in coding basic functions, many of which will be needed in future assignments. Sample functions are provided at the beginning of the code to give you an idea of how to work with images in Qt. The required functions come from the lectures on filtering, edge finding, and segmentation.

QImage Class in the Qt Package
The QImage class provides a hardware-independent image representation. Some of the useful methods:
- QImage() (and other constructors with parameters)
- copy(int x, int y, int width, int height) const
- setPixel(int x, int y, uint index_or_rgb); you can use the function qRgb(int r, int g, int b)
- width() const, height() const
The QRgb type holds a color pixel.
From http://doc.qt.io/qt-4.8/qimage.html

Double Arrays
We have modified the original assignment, which had truncation problems when passing images around. Instead, you will pass around arrays of doubles. The function ConvertQImage2Double() that we provide will convert a QImage to a 2D matrix of doubles. The first dimension indexes the pixels, covering both columns (c) and rows (r): position (c,r) maps to index r*imageWidth + c. The second dimension specifies the color channel (0, 1, 2). This will lead nicely into HW 2, which also uses doubles. You don't have to convert back to QImage! You do have to copy any images that you are going to modify.
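The (c,r)-to-flat-index mapping described above can be sketched as a one-line helper (a hypothetical name, not part of the provided code):

```cpp
#include <cassert>

// Map a (column, row) pixel coordinate to the flat index used by the
// double-array image format: the first dimension covers all pixels in
// row-major order, the second dimension is the color channel.
int pixelIndex(int c, int r, int imageWidth) {
    return r * imageWidth + c;
}
```

So pixel (c=3, r=5) in a 10-pixel-wide image lives at index 53 of the first dimension.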

1. Convolution
The first task is to code a general convolution function to be used in most of the others.
void Convolution(double **image, double *kernel, int kernelWidth, int kernelHeight, bool add)
- image is a 2D matrix of doubles
- kernel is a 1D mask array with its rows stacked horizontally
- kernelWidth is the width of the mask
- kernelHeight is the height of the mask
- if add is true, then 128 is added to each pixel of the result to get rid of negatives
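As a sketch of what Convolution() must do, here is a minimal single-channel version over a row-major array of doubles. Zero padding at the borders and the unflipped (correlation) orientation are assumptions, not taken from the assignment text; for the symmetric kernels used here (Gaussian, Laplacian) the flip makes no difference, and the real function works on the 3-channel double image.

```cpp
#include <cassert>
#include <vector>

// Sketch of a general 2D convolution on a single-channel, row-major image.
// kernel is a 1D array with rows stacked horizontally; kw and kh are odd.
std::vector<double> convolve(const std::vector<double>& image,
                             int width, int height,
                             const std::vector<double>& kernel,
                             int kw, int kh, bool add)
{
    std::vector<double> out(width * height, 0.0);
    int rx = kw / 2, ry = kh / 2;            // kernel radii
    for (int r = 0; r < height; ++r)
        for (int c = 0; c < width; ++c) {
            double sum = 0.0;
            for (int kr = -ry; kr <= ry; ++kr)
                for (int kc = -rx; kc <= rx; ++kc) {
                    int ir = r + kr, ic = c + kc;
                    if (ir < 0 || ir >= height || ic < 0 || ic >= width)
                        continue;            // zero padding outside the image
                    double w = kernel[(kr + ry) * kw + (kc + rx)];
                    sum += w * image[ir * width + ic];
                }
            // When add is true, shift by 128 so negative responses are visible.
            out[r * width + c] = sum + (add ? 128.0 : 0.0);
        }
    return out;
}
```

With the 3x3 identity kernel (center weight 1, all others 0) the output equals the input, which is a handy sanity check for your own implementation.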

Reminder: the 2D Gaussian function with standard deviation σ is
G(x,y) = (1 / (2πσ²)) exp(-(x² + y²) / (2σ²))

2. Gaussian Blur
The second task is to code a Gaussian blur, which can be done by calling the Convolution method with the appropriate kernel.
void GaussianBlurImage(double **image, double sigma)
Let the radius of the kernel be 3 times σ; the kernel size is then 2 * radius + 1, so the kernel has a center pixel.
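One possible way to build the kernel that GaussianBlurImage() passes to Convolution(). Rounding the radius up with ceil and normalizing by the sum (so overall brightness is preserved) are assumptions, not taken from the assignment text:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch: build a 2D Gaussian kernel from sigma. Radius = 3*sigma, size
// 2*radius + 1; dividing by the sum makes the weights sum to 1.
std::vector<double> gaussianKernel2D(double sigma, int& size)
{
    int radius = static_cast<int>(std::ceil(3.0 * sigma));
    size = 2 * radius + 1;
    std::vector<double> k(size * size);
    double sum = 0.0;
    for (int y = -radius; y <= radius; ++y)
        for (int x = -radius; x <= radius; ++x) {
            double v = std::exp(-(x * x + y * y) / (2.0 * sigma * sigma));
            k[(y + radius) * size + (x + radius)] = v;
            sum += v;
        }
    for (double& v : k) v /= sum;            // normalize to unit sum
    return k;
}
```

For sigma = 1.0 this yields a 7x7 kernel whose largest weight sits at the center.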

3. Separable Gaussian Blur
Now implement a separable Gaussian blur, using separate filters for the horizontal blur and then the vertical blur. Call your Convolution function twice.
void SeparableGaussianBlurImage(double **image, double sigma)
The results should be identical to the 2D Gaussian blur.
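Why the separable version works: the 2D Gaussian is the outer product of a 1D Gaussian with itself, so a horizontal pass followed by a vertical pass with the same 1D kernel reproduces the 2D blur at a fraction of the cost (O(k) instead of O(k²) multiplies per pixel). A sketch of the 1D kernel builder, with the same assumed radius choice as above:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch: a 1D Gaussian kernel; passing it to Convolution() once as a
// 1 x size mask and once as a size x 1 mask gives the separable blur.
std::vector<double> gaussianKernel1D(double sigma)
{
    int radius = static_cast<int>(std::ceil(3.0 * sigma));
    std::vector<double> k(2 * radius + 1);
    double sum = 0.0;
    for (int x = -radius; x <= radius; ++x) {
        k[x + radius] = std::exp(-(x * x) / (2.0 * sigma * sigma));
        sum += k[x + radius];
    }
    for (double& v : k) v /= sum;            // normalize to unit sum
    return k;
}
```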

4. First and Second Derivatives of the Gaussian
void FirstDerivative_x(double **image, double sigma) takes the image derivative in the x direction using a 1x3 kernel of { -1.0, 0.0, 1.0 } and then does a standard Gaussian blur.
void FirstDerivative_y(double **image, double sigma) takes the derivative in the y direction and then does a standard Gaussian blur.
void SecondDerivImage(double **image, double sigma) computes the Laplacian and then does a standard Gaussian blur. For the Laplacian, rather than taking the derivative twice, you may use the 2D kernel:
 0.0,  1.0, 0.0
 1.0, -4.0, 1.0
 0.0,  1.0, 0.0
All of these add 128 to the final pixel values so that negatives are visible. This is done in the call to Convolution().
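A tiny illustration of the { -1.0, 0.0, 1.0 } kernel used by FirstDerivative_x(): on a horizontal ramp whose pixel value equals its column index, the response is a constant 2, i.e. twice the per-pixel slope (the helper name is hypothetical):

```cpp
#include <cassert>

// Sketch: central-difference response of the {-1, 0, 1} kernel at column c
// of a single image row. Assumes 0 < c < rowLength - 1.
double centralDiffX(const double* row, int c) {
    return -1.0 * row[c - 1] + 0.0 * row[c] + 1.0 * row[c + 1];
}
```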

5. Sharpen Image
Sharpen an image by subtracting the Gaussian-smoothed second-derivative image, multiplied by alpha, from the original. You will need to subtract back off the 128 that SecondDerivImage added on.
void SharpenImage(double **image, double sigma, double alpha)
sigma is as usual, and alpha is the constant by which to multiply the smoothed second-derivative image.
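The per-pixel arithmetic can be sketched like this; clamping to [0, 255] is an assumption about the desired output range, not something the assignment states:

```cpp
#include <algorithm>
#include <cassert>

// Sketch of the sharpening step at one pixel: undo the +128 offset that
// SecondDerivImage added, scale by alpha, and subtract from the original.
double sharpenPixel(double original, double secondDerivPlus128, double alpha)
{
    double v = original - alpha * (secondDerivPlus128 - 128.0);
    return std::min(255.0, std::max(0.0, v));    // assumed clamp to [0, 255]
}
```

A pixel where the smoothed second derivative is exactly 128 (i.e. zero response) is left unchanged.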

6. Sobel Edge Detector
Implement the Sobel operator, produce both the magnitude and orientation of the edges, and display them.
void SobelImage(double **image)
Use the standard Sobel masks:
-1, 0, 1
-2, 0, 2
-1, 0, 1
and
 1,  2,  1
 0,  0,  0
-1, -2, -1
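A sketch of applying both masks at a single interior pixel of a row-major, single-channel image; the assignment's SobelImage() works on the 3-channel double image, and the function name here is hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Sketch: evaluate the two 3x3 Sobel masks at pixel (r, c) and return the
// gradient magnitude and orientation (radians). Assumes (r, c) is interior.
void sobelAt(const double* img, int width, int r, int c,
             double& mag, double& orient)
{
    static const double gx[9] = {-1, 0, 1, -2, 0, 2, -1, 0, 1};
    static const double gy[9] = { 1, 2, 1,  0, 0, 0, -1, -2, -1};
    double sx = 0.0, sy = 0.0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            double v = img[(r + dr) * width + (c + dc)];
            sx += gx[(dr + 1) * 3 + (dc + 1)] * v;
            sy += gy[(dr + 1) * 3 + (dc + 1)] * v;
        }
    mag = std::sqrt(sx * sx + sy * sy);
    orient = std::atan2(sy, sx);
}
```

On a vertical step edge the y-mask response cancels to zero and the orientation comes out as 0 radians.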

7. Bilinear Interpolation
Given an image and a real-valued point (x,y), compute the RGB values for that point through bilinear interpolation, which uses the 4 closest pixel values.
void BilinearInterpolation(double **image, double x, double y, double rgb[3])
Put the red, green, and blue interpolated results in the vector rgb.
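A single-channel sketch of the interpolation itself (the assignment's version fills rgb[3] with all three channels). It assumes (x,y) lies far enough inside the image that all four neighbors exist; how out-of-bounds points should be handled is up to the assignment:

```cpp
#include <cassert>
#include <cmath>

// Sketch: bilinear interpolation at real-valued (x, y) in a row-major,
// single-channel image. Weights the 4 surrounding pixels by area overlap.
double bilinear(const double* img, int width, double x, double y)
{
    int x0 = static_cast<int>(std::floor(x));
    int y0 = static_cast<int>(std::floor(y));
    double fx = x - x0, fy = y - y0;          // fractional offsets in [0, 1)
    double p00 = img[y0 * width + x0];
    double p10 = img[y0 * width + x0 + 1];
    double p01 = img[(y0 + 1) * width + x0];
    double p11 = img[(y0 + 1) * width + x0 + 1];
    return (1 - fx) * (1 - fy) * p00 + fx * (1 - fy) * p10
         + (1 - fx) * fy * p01 + fx * fy * p11;
}
```

At the exact center of a 2x2 block the result is simply the average of the four pixels.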

8. Find Peaks of Edge Responses
This function finds the peaks of the edge responses perpendicular to the edges.
void FindPeaksImage(double **image, double thres)
It first uses Sobel to find the magnitude and orientation at each pixel. Then, for each pixel, it compares the pixel's edge magnitude to two samples perpendicular to the edge at a distance of one pixel, which requires BilinearInterpolation(). If the pixel's edge magnitude is e and the two samples are e1 and e2, then for e to be a peak it must be larger than thres and larger than or equal to both e1 and e2. See the next slide.

The two perpendicular samples are computed from the pixel position (c,r) and the edge orientation θ:
e1x = c + 1 * cos(θ);  e1y = r + 1 * sin(θ)
e2x = c - 1 * cos(θ);  e2y = r - 1 * sin(θ)
Example: r = 5, c = 3, θ = 135 degrees, so sin θ = 0.7071 and cos θ = -0.7071:
e1 = (2.2929, 5.7071)
e2 = (3.7071, 4.2929)
(The slide's figure shows the edge through pixel (c,r), with e1 and e2 on the perpendicular line through it.)
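The sample offsets and the peak test can be sketched as two small helpers (hypothetical names; in the real code e1 and e2 are looked up with BilinearInterpolation()):

```cpp
#include <cassert>
#include <cmath>

// Sketch: positions of the two samples perpendicular to the edge through
// pixel (c, r) with orientation theta (radians), per the slide's formulas.
void peakSampleOffsets(double c, double r, double theta,
                       double& e1x, double& e1y, double& e2x, double& e2y)
{
    e1x = c + std::cos(theta);  e1y = r + std::sin(theta);
    e2x = c - std::cos(theta);  e2y = r - std::sin(theta);
}

// Sketch: the peak criterion from the previous slide.
bool isPeak(double e, double e1, double e2, double thres)
{
    return e > thres && e >= e1 && e >= e2;
}
```

Plugging in the slide's example (c = 3, r = 5, θ = 135 degrees) reproduces e1 = (2.2929, 5.7071) and e2 = (3.7071, 4.2929).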

9. Color Clustering
Perform K-means clustering on a color image, first with random seeds and then by selecting seeds from the image itself.
void RandomSeedImage(double **image, int num_clusters)
void PixelSeedImage(double **image, int num_clusters)
Use the RGB color space; the distance between two pixels with colors (R1,G1,B1) and (R2,G2,B2) is |R1-R2| + |G1-G2| + |B1-B2|. Use epsilon = 30 or a maximum iteration count of 100 as the stopping criterion.
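The stated distance is the L1 (Manhattan) distance in RGB space, used both to assign pixels to their nearest cluster center and to test convergence. A sketch (hypothetical helper name):

```cpp
#include <cassert>
#include <cmath>

// Sketch: L1 (Manhattan) distance between two RGB colors, as specified
// for the K-means assignment step: |R1-R2| + |G1-G2| + |B1-B2|.
double colorDistance(const double a[3], const double b[3])
{
    return std::fabs(a[0] - b[0])
         + std::fabs(a[1] - b[1])
         + std::fabs(a[2] - b[2]);
}
```

Each K-means iteration assigns every pixel to the center minimizing this distance, then recomputes each center as the mean color of its assigned pixels.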