Lecture 6: Linear Processing (Ch. 5 of Machine Vision by Wesley E. Snyder & Hairong Qi)

Lecture 6: Linear Processing (Ch. 5 of Machine Vision by Wesley E. Snyder & Hairong Qi). Note: To print these slides in grayscale (e.g., on a laser printer), first change the Theme Background to “Style 1” (i.e., dark text on white background), and then tell PowerPoint’s print dialog that the “Output” is “Grayscale.” Everything should be clearly legible then. This is how I also generate the .pdf handouts.

Linear Operators. D is a linear operator iff (“if and only if”): D(αf1 + βf2) = αD(f1) + βD(f2), where f1 and f2 are images and α and β are scalar multipliers. Not a linear operator (why?): g = D(f) = af + b.
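A quick numerical sketch of this definition (the kernel, images, and scalars below are arbitrary test values): a correlation operator satisfies the linearity equation, while the affine map af + b does not.

```python
import numpy as np
from scipy.ndimage import correlate

# Sketch: check D(a*f1 + b*f2) == a*D(f1) + b*D(f2) for two operators.
rng = np.random.default_rng(0)
f1, f2 = rng.random((5, 5)), rng.random((5, 5))
alpha, beta = 2.0, -3.0
h = rng.random((3, 3))                             # an arbitrary kernel

D_linear = lambda f: correlate(f, h, mode='wrap')  # kernel (correlation) operator
D_affine = lambda f: 0.5 * f + 1.0                 # g = a*f + b

for D in (D_linear, D_affine):
    lhs = D(alpha * f1 + beta * f2)
    rhs = alpha * D(f1) + beta * D(f2)
    print(np.allclose(lhs, rhs))    # True for correlation, False for af + b
```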

Kernel Operators. A kernel (h) is a “small image,” often 3x3 or 5x5, that is correlated with a “normal” image (f); the slide shows a 3x3 kernel with entries h−1,−1 through h1,1 sliding over a 5x5 image with pixels f0,0 through f4,4. The implied correlation (sum of products) makes a kernel an operator, and a linear one. Note: this use of correlation is often mislabeled as convolution in the literature. Any linear operator applied to an image can be approximated with correlation. Flip the kernel in the x and y directions to switch between correlation and convolution.
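A minimal sketch of the sum-of-products a kernel implies, written out explicitly for a 3x3 kernel and a 5x5 image; the sizes and example values are illustrative.

```python
import numpy as np

def correlate3x3(f, h):
    """Correlate image f with a 3x3 kernel h (sum of products at each pixel).

    Border pixels are skipped for simplicity; flip h in x and y
    (h[::-1, ::-1]) to get convolution instead of correlation.
    """
    g = np.zeros_like(f, dtype=float)
    for y in range(1, f.shape[0] - 1):
        for x in range(1, f.shape[1] - 1):
            # sum over the 3x3 neighborhood, offsets -1..1 in each direction
            g[y, x] = np.sum(h * f[y - 1:y + 2, x - 1:x + 2])
    return g

f = np.arange(25, dtype=float).reshape(5, 5)   # a 5x5 image like the slide's f
h = np.ones((3, 3)) / 9.0                      # a 3x3 averaging kernel
print(correlate3x3(f, h))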

Kernels for Derivatives. Task: estimate partial spatial derivatives. Solution: numerical approximation. [f(x + 1) − f(x)]/1 is a really bad choice: it is not even symmetric. [f(x + 1) − f(x − 1)]/2 is still a bad choice: it is very sensitive to noise. We need to blur away the noise (blurring only orthogonal to the direction of each partial); the Sobel kernel is the center-weighted version. Either way, the operation is a correlation (sum of products).
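To make the comparison concrete, here is a sketch of the three estimators on a noisy ramp; the Sobel x-kernel is the standard center-weighted form, and the 1/8 normalization is one common convention, not the textbook's.

```python
import numpy as np
from scipy.ndimage import correlate

# Sketch: three estimates of df/dx on a ramp (true derivative = 1) plus noise.
rng = np.random.default_rng(1)
img = np.tile(np.arange(10, dtype=float), (10, 1))   # ramp: df/dx = 1
img += 0.1 * rng.standard_normal(img.shape)          # a little noise

forward = img[:, 1:] - img[:, :-1]                   # [f(x+1) - f(x)] / 1
central = (img[:, 2:] - img[:, :-2]) / 2.0           # [f(x+1) - f(x-1)] / 2
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]]) / 8.0               # blurs orthogonal to d/dx
sobel = correlate(img, sobel_x, mode='nearest')

# Spread of each estimate around the true slope of 1 (interior columns only):
print(forward.std(), central.std(), sobel[:, 1:-1].std())   # Sobel is least noisy
```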

Derivative Estimation #2: Use Function Fitting. Think of the image as a surface; the gradient then fully specifies the orientation of the tangent plane at every point, and vice versa. So, fit a plane to the neighborhood around a point; the plane then gives you the gradient. The concept of fitting occurs frequently in machine vision, e.g. for gray values, surfaces, lines, curves, etc.

Derivative Estimation: Derive a 3x3 Kernel by Fitting a Plane. If you fit by minimizing squared error, and you use symbolic notation to generalize, you get: a headache, and the derivative kernel that we intuitively guessed earlier, scaled by 1/6.
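A small sketch verifying that claim: fit the plane z = ax + by + c to a 3x3 neighborhood by least squares and read off the weights that produce the slope a. They form the x-derivative kernel with a 1/6 scale factor.

```python
import numpy as np

# Fit z = a*x + b*y + c to a 3x3 neighborhood (x, y in {-1, 0, 1}) by least
# squares. The weights mapping the 9 pixel values to "a" are the d/dx kernel.
xs, ys = np.meshgrid([-1, 0, 1], [-1, 0, 1])
A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(9)])   # design matrix

weights_a = np.linalg.pinv(A)[0]      # first row of the pseudo-inverse -> a
print(weights_a.reshape(3, 3))
# [[-1/6  0  1/6]
#  [-1/6  0  1/6]
#  [-1/6  0  1/6]]
```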

Vector Representations of Images. Also called lexicographic representations: linearize the image so that each pixel has a single index (starting at 0), its lexicographic index. The slide's example maps a 4x4 image with pixels f0,0 through f3,3 onto a vector F0 through F15; the first pixel value 7 becomes F0 = 7. It is simply a vector listing of the pixel values, i.e. a change of coordinates.
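A minimal sketch of that change of coordinates (the 4x4 size matches the slide's example; the pixel values here are arbitrary).

```python
import numpy as np

# Lexicographic representation: give every pixel a single index, starting at 0,
# by flattening the image row by row.
f = np.arange(16).reshape(4, 4)   # a 4x4 image f[y, x] (values are arbitrary)
F = f.ravel()                     # F[k] with k = y * width + x

y, x = 2, 1                       # pixel coordinate (x, y) = (1, 2)
k = y * f.shape[1] + x
print(k, F[k] == f[y, x])         # 9 True  -> this pixel is F9
```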

Vector Representations of Kernels. A kernel can also be linearized, but the linearization is unique to each pixel coordinate and each image size; the slide builds it for pixel coordinate (1,2) (i.e. pixel F9) in our image, using the example 3x3 kernel h = [−3 1 2; −5 4 6; −7 9 8]. Combining the kernel vectors for all of the pixels gives a single lexicographic kernel matrix H. This matrix is HUGE (N²). H is circulant (its columns are rotations of one another). Why? Because of the boundary conditions. The ability to convolve like this proves that convolution is a linear operator.

Convolution in Lexicographic Representations. Convolution becomes matrix multiplication! This is a great conceptual tool for proving theorems, but H is almost never computed or written out in practice.
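A 1-D sketch (assuming periodic boundary conditions, which is what makes H circulant): build H from the kernel and check that H·f matches circular convolution.

```python
import numpy as np

# Tiny 1-D example: convolution as multiplication by a circulant matrix H.
N = 8
f = np.arange(N, dtype=float)                  # lexicographic image vector
h = np.zeros(N); h[:3] = [1.0, 2.0, -1.0]      # kernel, zero-padded to length N

# H[n, m] = h[(n - m) mod N]  ->  columns are rotations of one another
H = np.array([[h[(n - m) % N] for m in range(N)] for n in range(N)])

g_matrix = H @ f                               # convolution via matrix multiply
g_direct = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(f)))  # circular conv.
print(np.allclose(g_matrix, g_direct))         # True
```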

Basis Vectors for (Sub)Images. Carefully choose a set of basis vectors (image patches) on which to project a sub-image (window) of size (x, y). (Is this lexicographic?) The basis vectors with the largest coefficients are the ones most like the sub-image, so if we choose meaningful basis vectors, the coefficients tell us something about the sub-image. Cartesian basis vectors are “useless” here; the Frei-Chen basis vectors are informative. Edge detection then becomes an inner product of the neighborhood vector with the edge basis vectors.
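A sketch of the projection idea using two hand-made, orthogonal edge-like patches (illustrative only, not the actual Frei-Chen masks): the coefficient with the largest magnitude says which basis patch the window most resembles.

```python
import numpy as np

# Two simple orthogonal "edge" patches (illustrative, not the Frei-Chen set).
horiz_edge = np.array([[ 1,  1,  1],
                       [ 0,  0,  0],
                       [-1, -1, -1]], dtype=float)
vert_edge = horiz_edge.T
basis = [b / np.linalg.norm(b) for b in (horiz_edge, vert_edge)]   # normalize

window = np.array([[10, 10, 10],
                   [10, 10, 10],
                   [ 2,  2,  2]], dtype=float)    # a mostly horizontal edge

# Project the window onto each basis vector: an inner product per patch.
coeffs = [float(np.sum(window * b)) for b in basis]
print(coeffs)   # large first coefficient -> looks like a horizontal edge
```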

Edge Detection (VERY IMPORTANT). Edges are image areas where the brightness changes suddenly, i.e. where some derivative has a large magnitude; they often occur at object boundaries. Find them by estimating the partial derivatives with kernels, then calculating magnitude and direction from the partials. The slide's profiles range from easy to hard to find: positive and negative step edges, roof edges, ramp edges, and noisy positive and negative edges. Noise makes edges harder to find!
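A sketch of the two-step recipe using the Sobel operators from scipy.ndimage (any derivative kernels would do); the toy image is a single vertical step edge.

```python
import numpy as np
from scipy import ndimage

def gradient_mag_dir(img):
    """Estimate gradient magnitude and direction from Sobel partials."""
    gx = ndimage.sobel(img, axis=1)        # d/dx (along columns)
    gy = ndimage.sobel(img, axis=0)        # d/dy (along rows)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)         # radians, measured from the +x axis
    return magnitude, direction

# Toy image: dark left half, bright right half -> a vertical step edge.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
mag, ang = gradient_mag_dir(img)
print(mag.max(), np.degrees(ang[4, 4]))    # strong response near column 4, 0 deg
```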

Edge Detection, continued. Figure: Diatom image (left) and its gradient magnitude (right) (http://bigwww.epfl.ch/thevenaz/differentials/). Thresholding the gradient magnitude image is the best we can do with convolution, but the detected edges are too thick in places, missing in places, and extraneous in places.

Convolving with Fourier. Sometimes the fastest way to convolve is to multiply in the frequency domain. Multiplication is fast; Fourier transforms are not, but the Fast Fourier Transform (FFT) helps. Pratt (Snyder ref. 5.33) figured out the details: it is a complex tradeoff that depends on both the size of the kernel and the size of the image. For kernels ≤ 7x7, normal (spatial-domain) convolution is fastest*; for kernels ≥ 13x13, the Fourier method is fastest*. (*For almost all image sizes.)
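A sketch of the frequency-domain route (assuming periodic boundaries), checked against a direct circular convolution; the image and kernel sizes are arbitrary.

```python
import numpy as np

# Convolve by multiplying in the frequency domain, then compare with a direct
# circular convolution built from shifted copies of the image.
rng = np.random.default_rng(0)
img = rng.random((64, 64))
kernel = np.ones((7, 7)) / 49.0                      # simple box blur

kpad = np.zeros_like(img)                            # zero-pad to image size
kpad[:7, :7] = kernel
freq_result = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kpad)))

direct = np.zeros_like(img)
for dy in range(7):
    for dx in range(7):
        direct += kernel[dy, dx] * np.roll(img, shift=(dy, dx), axis=(0, 1))
print(np.allclose(freq_result, direct))              # True
```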

Image Pyramids. A pyramid is a series of representations of the same image, each a 2:1 subsampling of the image at the next “lower” level, so scale increases as you go up. Subsampling = averaging = downsampling, and it happens across all dimensions: for a 2D image, 4 pixels in one layer correspond to 1 pixel in the next layer. To make a Gaussian pyramid: (1) blur with a Gaussian, (2) downsample by 2:1 in each dimension, (3) go to step 1. A low-frequency + high-frequency image pair lets you reconstruct the original.
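A minimal Gaussian-pyramid sketch following those three steps; the σ = 1 blur and four levels are assumed, typical choices rather than the textbook's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(img, levels=4, sigma=1.0):
    """Blur with a Gaussian, downsample 2:1 in each dimension, repeat."""
    pyramid = [img]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma)   # step 1: blur
        pyramid.append(blurred[::2, ::2])               # step 2: 2:1 subsample
    return pyramid                                      # step 3: repeat

img = np.random.default_rng(0).random((64, 64))
for level in gaussian_pyramid(img):
    print(level.shape)          # (64, 64) (32, 32) (16, 16) (8, 8)
```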

Scale Space. Multiple levels, like a pyramid, blurred like a pyramid, but don't subsample: all layers have the same size. Instead, convolve each layer with a Gaussian of variance σ²; σ is the “scale parameter.” Only large features are visible at high scale (large σ).
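The corresponding scale-space sketch: the same blurring, but no subsampling, so every layer keeps the original size (the σ schedule below is an arbitrary choice).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Scale space: one layer per scale parameter sigma, all layers the same size.
img = np.random.default_rng(0).random((64, 64))
sigmas = [1, 2, 4, 8]                               # increasing scale
scale_space = [gaussian_filter(img, s) for s in sigmas]
print([layer.shape for layer in scale_space])       # all (64, 64)
```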

Quad/Oc Trees. Represent an image as homogeneous blocks; the slide's figure shows a quadtree subdivision with numbered nodes (1, 2, 3, 10, 11, 12, 13, 31, 32, 33). Quadtrees are inefficient for storage (too much overhead) and not stable across small changes, but they are useful for representing scale space.

Gaussian Scale Space. Large scale = only large objects are visible; increasing σ gives coarser representations. Scale space causality: as σ increases, the number of extrema should not increase. This allows you to find “important” edges first, at high scale. How features vary with scale tells us something about the image, and non-integral steps in scale can be used. Scale space is useful for representing brightness, texture, and PDFs (scale space implements clustering).

How do People Do It? Receptive fields are representable by Gabor functions: a 2D Gaussian multiplied by a plane wave, where the plane wave tends to propagate along the short axis of the Gaussian. They are also representable by a difference of offset Gaussians (only 3 extrema). Compare to wavelets, which represent frequency plus location (compare to Fourier): at position x, what are the local frequencies?
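A sketch of a Gabor kernel in one common parameterization (Gaussian envelope times a cosine plane wave); all parameter names and values below are illustrative choices, not the lecture's.

```python
import numpy as np

def gabor_kernel(size=21, sigma=3.0, gamma=0.5, theta=0.0,
                 wavelength=6.0, psi=0.0):
    """2D Gaussian envelope multiplied by a plane wave (one common form)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates; the wave propagates along the rotated x axis.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    wave = np.cos(2 * np.pi * xr / wavelength + psi)
    return envelope * wave

print(gabor_kernel().shape)     # (21, 21)
```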

Canny Edge Detector. Use kernels to find, at every point, the gradient magnitude and gradient direction. Then perform nonmaximum suppression (NMS) on the magnitude image: this thins edges that are too thick by preserving only gradient magnitudes that are maximal compared to their 2 neighbors in the direction of the gradient.
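A sketch of NMS with the gradient direction quantized to four cases (horizontal, vertical, two diagonals); border pixels are simply zeroed for brevity.

```python
import numpy as np

def nonmax_suppress(mag, ang):
    """Keep a pixel only if its gradient magnitude is >= its two neighbors
    along the (quantized) gradient direction. ang is in radians."""
    out = np.zeros_like(mag)
    angle = (np.degrees(ang) + 180.0) % 180.0        # fold into [0, 180)
    offsets = {0: (0, 1), 45: (1, 1), 90: (1, 0), 135: (1, -1)}
    for y in range(1, mag.shape[0] - 1):
        for x in range(1, mag.shape[1] - 1):
            a = angle[y, x]
            # nearest quantized direction (distance wraps around at 180)
            key = min(offsets, key=lambda d: min(abs(a - d), 180 - abs(a - d)))
            dy, dx = offsets[key]
            n1, n2 = mag[y + dy, x + dx], mag[y - dy, x - dx]
            if mag[y, x] >= n1 and mag[y, x] >= n2:
                out[y, x] = mag[y, x]
    return out

# Usage (illustrative): thin = nonmax_suppress(mag, ang) with the magnitude
# and direction images computed on the previous slide.
```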

Canny Edge Detector, continued. Edges are now properly located and 1 pixel wide, but noise leads to false edges, and noise plus blur leads to missing edges. Help this with 2 thresholds: a high threshold does not admit many false edges, and a low threshold does not miss many edges. Do a “flood fill” on the low-threshold result, seeded by the high-threshold result, and only flood fill along isophotes.
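A sketch of the two-threshold “flood fill” using connected components (scipy.ndimage.label); the threshold values are illustrative, and the isophote constraint is omitted.

```python
import numpy as np
from scipy import ndimage

def hysteresis(mag, low, high):
    """Keep weak (>= low) edge pixels only if their connected component
    also contains at least one strong (>= high) pixel."""
    weak = mag >= low
    strong = mag >= high
    labels, n = ndimage.label(weak)             # components of the weak map
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True      # components seeded by strong pixels
    keep[0] = False                             # background label
    return keep[labels]

edges = hysteresis(np.random.default_rng(0).random((32, 32)), low=0.4, high=0.9)
print(edges.dtype, edges.shape)                 # bool (32, 32)
```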

Reminders. Quiz 4 next class. HW2 due Wednesday, 5pm.