Instructor: Mircea Nicolescu Lecture 7 CS 485 / 685 Computer Vision

Second Derivative in 2D: Laplacian The Laplacian: ∇²f = ∂²f/∂x² + ∂²f/∂y²

Second Derivative in 2D: Laplacian The Laplacian can be implemented using the mask:
 0   1   0
 1  -4   1
 0   1   0
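A minimal sketch of applying this mask with NumPy/SciPy (the mask shown is the standard 4-neighbor Laplacian; image loading is left out):

    import numpy as np
    from scipy.ndimage import convolve

    laplacian_mask = np.array([[0,  1, 0],
                               [1, -4, 1],
                               [0,  1, 0]], dtype=float)

    def laplacian(image):
        # Convolve the gray-level image with the 3x3 Laplacian mask.
        return convolve(image.astype(float), laplacian_mask, mode='nearest')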

Variations of Laplacian

Laplacian - Example (original image, mask, and filtered result)

Properties of Laplacian It is an isotropic operator. It is cheaper to implement than the gradient (one mask only). It does not provide information about edge direction. It is more sensitive to noise (differentiates twice).

Properties of Laplacian How do we estimate the edge strength? Four cases of zero-crossings: {+,-}, {+,0,-}, {-,+}, {-,0,+}. The slope of a zero-crossing {a, -b} is |a+b|. To mark an edge: compute the slope at the zero-crossing and apply a threshold to the slope.
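The zero-crossing and slope test can be sketched as follows; the pairwise scan along rows and columns is a simplification (the three-pixel {+,0,-} cases are omitted) and the threshold is an illustrative parameter:

    import numpy as np

    def zero_crossings(lap, threshold):
        # Mark pixels where the Laplacian changes sign along a row or column
        # and the slope |a - b| across the crossing exceeds the threshold.
        edges = np.zeros(lap.shape, dtype=bool)
        a, b = lap[:, :-1], lap[:, 1:]                  # horizontal neighbor pairs
        edges[:, :-1] |= (np.sign(a) * np.sign(b) < 0) & (np.abs(a - b) > threshold)
        a, b = lap[:-1, :], lap[1:, :]                  # vertical neighbor pairs
        edges[:-1, :] |= (np.sign(a) * np.sign(b) < 0) & (np.abs(a - b) > threshold)
        return edges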

Laplacian of Gaussian (LoG) The Marr-Hildreth edge detector Uses the Laplacian-of-Gaussian (LoG) To reduce the noise effect, the image is first smoothed with a low-pass filter. In the case of the LoG, the low-pass filter is chosen to be a Gaussian. (σ determines the degree of smoothing, mask size increases with σ)

Laplacian of Gaussian (LoG) It can be shown that: ∇²G(x,y) = (1/(πσ⁴)) ((x²+y²)/(2σ²) − 1) exp(−(x²+y²)/(2σ²)) (the figure on the slide plots the inverted LoG, −∇²G)

Laplacian of Gaussian (LoG) Masks:

Laplacian of Gaussian (LoG) Example

Separability Gaussian: A 2-D Gaussian can be separated into two 1-D Gaussians, G(x,y) = g(x) g(y). Direct convolution with a k×k 2-D mask costs k² multiplications per pixel; performing 2 convolutions with 1-D Gaussians costs only 2k multiplications per pixel.
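A sketch of the separable implementation using SciPy's 1-D Gaussian filter; the σ value is arbitrary:

    from scipy.ndimage import gaussian_filter1d

    def gaussian_smooth(image, sigma=2.0):
        # Two 1-D convolutions (rows, then columns) instead of one 2-D convolution:
        # roughly 2k multiplications per pixel instead of k**2.
        tmp = gaussian_filter1d(image.astype(float), sigma, axis=1)   # along x
        return gaussian_filter1d(tmp, sigma, axis=0)                  # along y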

Separability Laplacian-of-Gaussian: Direct convolution with a k×k 2-D LoG mask requires k² multiplications per pixel; a separable implementation with four 1-D filters requires 4k multiplications per pixel.

Separability Gaussian filtering: convolve the image with g(x) along the rows, then with g(y) along the columns. Laplacian-of-Gaussian filtering: convolve the image with gxx(x) then g(y), convolve it with g(x) then gyy(y), and add the two results.

Separability of LoG Steps: (1) convolve the image with gxx(x) along the rows and g(y) along the columns; (2) convolve the image with g(x) along the rows and gyy(y) along the columns; (3) add the two results.
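A sketch of the four-1-D-filter form using the order argument of gaussian_filter1d (order=2 gives the second-derivative-of-Gaussian kernel); σ is arbitrary:

    from scipy.ndimage import gaussian_filter1d

    def log_separable(image, sigma=2.0):
        # LoG(I) = [gxx(x) then g(y)] * I  +  [g(x) then gyy(y)] * I  -- four 1-D convolutions
        part_x = gaussian_filter1d(
            gaussian_filter1d(image.astype(float), sigma, axis=1, order=2),
            sigma, axis=0, order=0)
        part_y = gaussian_filter1d(
            gaussian_filter1d(image.astype(float), sigma, axis=1, order=0),
            sigma, axis=0, order=2)
        return part_x + part_y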

Laplacian of Gaussian (LoG) Marr-Hildreth (LoG) Algorithm: Compute the LoG, using either one 2-D filter (∇²G) or four 1-D filters (the separable form above). Find zero-crossings from each row and column. Find the slope of the zero-crossings. Apply a threshold to the slope and mark edges.
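A compact sketch of the whole algorithm, here using SciPy's built-in LoG filter rather than the four 1-D filters; σ and the slope threshold are illustrative values, and only adjacent-pixel zero-crossings are tested:

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def marr_hildreth(image, sigma=2.0, threshold=0.1):
        log = gaussian_laplace(image.astype(float), sigma)   # step 1: compute LoG
        edges = np.zeros(log.shape, dtype=bool)
        # steps 2-4: zero-crossings along rows and columns, slope test, threshold
        for a, b, view in ((log[:, :-1], log[:, 1:], edges[:, :-1]),
                           (log[:-1, :], log[1:, :], edges[:-1, :])):
            view |= (np.sign(a) * np.sign(b) < 0) & (np.abs(a - b) > threshold)
        return edges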

Gradient vs. LoG – a comparison: The gradient works well when the image contains sharp intensity transitions. Zero-crossings of the LoG offer better localization, especially when the edges are not very sharp (ramp edges rather than step edges).

Gradient vs LoG Disadvantage of LoG edge detection: Does not handle corners well

Gradient vs LoG Disadvantage of LoG edge detection: Does not handle corners well Why? The derivative of the Gaussian is an oriented filter, while the Laplacian of the Gaussian is unoriented (rotationally symmetric), so it responds poorly where the edge direction changes abruptly.

Difference of Gaussians (DoG) The Difference-of-Gaussians (DoG) approximates the LoG filter with a filter that is the difference of two differently sized Gaussians – a DoG filter. The image is first smoothed by convolution with a Gaussian kernel of scale σ₁. A second image is obtained by smoothing with a Gaussian kernel of scale σ₂.

Difference of Gaussians (DoG) Their difference is: (I ∗ Gσ₁) − (I ∗ Gσ₂) = I ∗ (Gσ₁ − Gσ₂). The DoG as an operator or convolution kernel is defined as: DoG(x,y) = Gσ₁(x,y) − Gσ₂(x,y), which closely approximates the actual LoG kernel.
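A sketch of DoG filtering as the difference of two Gaussian-smoothed images; the pair of σ values is an illustrative choice:

    from scipy.ndimage import gaussian_filter

    def dog(image, sigma1=1.0, sigma2=2.0):
        # I * G_sigma1  -  I * G_sigma2  =  I * (G_sigma1 - G_sigma2)
        return gaussian_filter(image.astype(float), sigma1) - \
               gaussian_filter(image.astype(float), sigma2)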

Difference of Gaussians (DoG) Example: the image smoothed with σ = 1, the image smoothed with σ = 2, and their difference.

Edge Detection Using Directional Derivative The second directional derivative This is the second derivative computed in the direction of the gradient.

Directional Derivative The partial derivatives of f(x,y) will give the slope ∂f/∂x in the positive x direction and the slope ∂f/∂y in the positive y direction. We can generalize the partial derivatives to calculate the slope in any direction (i.e., the directional derivative).

Directional Derivative Directional derivative computes intensity changes in a specified direction. Compute derivative in direction u

Directional Derivative The directional derivative is a linear combination of partial derivatives (from vector calculus): D_u f(x,y) = u₁ ∂f/∂x + u₂ ∂f/∂y

Directional Derivative For a unit vector u = (cosθ, sinθ), ||u|| = 1: D_u f(x,y) = cosθ ∂f/∂x + sinθ ∂f/∂y
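A sketch that approximates the partial derivatives with Sobel filters and combines them for an arbitrary direction θ:

    import numpy as np
    from scipy.ndimage import sobel

    def directional_derivative(image, theta):
        fx = sobel(image.astype(float), axis=1)   # approximate df/dx
        fy = sobel(image.astype(float), axis=0)   # approximate df/dy
        return np.cos(theta) * fx + np.sin(theta) * fy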

Higher Order Directional Derivatives

Edge Detection Using Directional Derivative What direction would you use for edge detection? Direction of gradient: u = ∇f / ||∇f||, i.e., cosθ = fx / √(fx² + fy²) and sinθ = fy / √(fx² + fy²)

Edge Detection Using Directional Derivative Second directional derivative along gradient direction: f_uu = (fx² fxx + 2 fx fy fxy + fy² fyy) / (fx² + fy²)
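Assuming the standard expression above, a sketch that estimates the derivatives with Gaussian-derivative filters (σ is an illustrative smoothing scale):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def second_deriv_along_gradient(image, sigma=1.0, eps=1e-8):
        f = image.astype(float)
        fx  = gaussian_filter(f, sigma, order=(0, 1))   # df/dx
        fy  = gaussian_filter(f, sigma, order=(1, 0))   # df/dy
        fxx = gaussian_filter(f, sigma, order=(0, 2))
        fyy = gaussian_filter(f, sigma, order=(2, 0))
        fxy = gaussian_filter(f, sigma, order=(1, 1))
        num = fx**2 * fxx + 2 * fx * fy * fxy + fy**2 * fyy
        return num / (fx**2 + fy**2 + eps)   # eps avoids division by zero in flat regions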

Properties of Second Directional Derivative

Facet Model Assumes that an image is an array of samples of a continuous function f(x,y). Reconstructs f(x,y) from sampled pixel values. Uses directional derivatives which are computed analytically (without using discrete approximations). z=f(x,y)

Facet Model For complex images, f(x,y) could contain extremely high powers of x and y. Idea: model f(x,y) as a piece-wise function. Approximate each pixel value by fitting a bi-cubic polynomial in a small neighborhood around the pixel (facet).

Facet Model Steps: (1) Fit a bi-cubic polynomial to a small neighborhood of each pixel. (2) Compute (analytically) the directional derivatives in the direction of the gradient. (3) Find points where the second derivative is equal to zero (this step provides smoothing too).
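A least-squares sketch of fitting a 10-term cubic facet basis to one 5×5 neighborhood; the neighborhood size and basis ordering are illustrative choices, not necessarily those used in the lecture:

    import numpy as np

    def fit_cubic_facet(patch):
        # patch: 5x5 array of gray values centered on the pixel of interest.
        n = patch.shape[0] // 2
        y, x = np.mgrid[-n:n+1, -n:n+1]
        x, y = x.ravel().astype(float), y.ravel().astype(float)
        # cubic basis: 1, x, y, x^2, xy, y^2, x^3, x^2 y, x y^2, y^3
        A = np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                             x**3, x*x*y, x*y*y, y**3])
        coeffs, *_ = np.linalg.lstsq(A, patch.ravel().astype(float), rcond=None)
        return coeffs   # directional derivatives at the center follow analytically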

Anisotropic Filtering Symmetric Gaussian smoothing tends to blur out edges rather aggressively. An “oriented” smoothing operator (edge-preserving smoothing) works better: (i) smooth aggressively perpendicular to the gradient; (ii) smooth little along the gradient. Mathematically, this is formulated using a diffusion equation.
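The slide only states the idea; one common concrete formulation is Perona-Malik diffusion, sketched here with illustrative iteration count, step size, and conductance constant:

    import numpy as np

    def perona_malik(image, iterations=20, kappa=20.0, step=0.2):
        # Edge-preserving smoothing: the conductance g = exp(-(d/kappa)^2) is small
        # across strong intensity differences, so diffusion happens mostly in flat regions.
        u = image.astype(float)
        g = lambda d: np.exp(-(d / kappa) ** 2)
        for _ in range(iterations):
            dn = np.roll(u, -1, axis=0) - u   # differences to the 4 neighbors
            ds = np.roll(u,  1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u,  1, axis=1) - u
            u = u + step * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
        return u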

Anisotropic Filtering – Example: result using anisotropic filtering

Effect of Scale (original image shown with results at increasing σ): Small σ detects fine features. Large σ detects large-scale edges.

Multi-Scale Processing A formal theory for handling image structures at multiple scales. Determine which structures (e.g., edges) are most significant by considering the range of scales over which they occur.

Multi-Scale Processing (results shown at σ = 1, 2, 4, 8, 16) Interesting scales: scales at which important structures are present; e.g., in the image above, people can be detected at scales 1–4.

Scale Space Detect and plot the zero-crossings of a 1-D function over a continuum of scales σ (the plot shows the Gaussian-filtered signal, with position x on one axis and scale σ on the other). Instead of treating zero-crossings at a single scale as single points, we can now treat them at multiple scales as contours.

Scale Space Properties of scale space (assuming Gaussian smoothing): Zero-crossings may shift with increasing scale. Two zero-crossings may merge with increasing scale. A contour may not split in two with increasing scale.
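A toy 1-D sketch of how the number of zero-crossings behaves across scales; the signal and the scale range are made up for illustration:

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    signal = np.random.default_rng(0).standard_normal(256).cumsum()   # toy 1-D signal
    for sigma in (1, 2, 4, 8, 16):
        d2 = gaussian_filter1d(signal, sigma, order=2)                # smoothed 2nd derivative
        zc = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
        print(f"sigma={sigma:2d}: {len(zc)} zero-crossings")          # count drops as sigma grows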

Multi-Scale Processing

Multi-Scale Processing

Edge Detection is Just the Beginning… (image, human segmentation, gradient magnitude) Berkeley segmentation database: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench/

Math Review Vectors Matrices SVD Linear systems Geometric transformations

n-Dimensional Vector An n-dimensional vector v is denoted as a column vector with components x₁, x₂, …, xₙ. Its transpose vᵀ is denoted as the row vector (x₁, x₂, …, xₙ).

Vector Normalization Vector normalization → unit-length vector: v̂ = v / ||v||. Example: v = (3, 4) has ||v|| = 5, so v̂ = (0.6, 0.8).

Inner (or Dot) Product Given vᵀ = (x₁, x₂, …, xₙ) and wᵀ = (y₁, y₂, …, yₙ), their dot product is defined as the scalar: v · w = x₁y₁ + x₂y₂ + … + xₙyₙ, or equivalently v · w = vᵀw.

Defining Magnitude Using Dot Product Magnitude definition: ||v|| = √(x₁² + x₂² + … + xₙ²). Dot product definition: v · v = x₁² + x₂² + … + xₙ². Therefore: ||v|| = √(v · v).

Geometric Definition of Dot Product u · v = ||u|| ||v|| cos(θ), where θ corresponds to the smaller angle between u and v.

Geometric Definition of Dot Product The sign of u · v depends on cos(θ): it is positive when θ < 90°, zero when the vectors are orthogonal (θ = 90°), and negative when θ > 90°.
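A quick numeric check of these definitions; the vectors are arbitrary examples:

    import numpy as np

    u = np.array([1.0, 2.0, 2.0])
    v = np.array([2.0, 0.0, 1.0])
    dot = np.dot(u, v)                                   # scalar: sum of x_i * y_i
    theta = np.arccos(dot / (np.linalg.norm(u) * np.linalg.norm(v)))
    print(dot, np.degrees(theta))                        # sign of dot follows cos(theta)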

Vector (Cross) Product The cross product w = u × v is a VECTOR! Magnitude: ||u × v|| = ||u|| ||v|| sin(θ). Orientation: perpendicular to both u and v, following the right-hand rule.

Vector Product Computation u × v = (u₂v₃ − u₃v₂, u₃v₁ − u₁v₃, u₁v₂ − u₂v₁), i.e., the determinant of the 3×3 matrix with rows (i, j, k), (u₁, u₂, u₃), (v₁, v₂, v₃).
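The component formula matches NumPy's cross product; a small check (with arbitrary example vectors) that the result is orthogonal to both inputs:

    import numpy as np

    u = np.array([1.0, 0.0, 0.0])
    v = np.array([0.0, 1.0, 0.0])
    w = np.cross(u, v)                     # (u2*v3 - u3*v2, u3*v1 - u1*v3, u1*v2 - u2*v1)
    print(w, np.dot(w, u), np.dot(w, v))   # [0. 0. 1.]  0.0  0.0
    # magnitude equals ||u|| ||v|| sin(theta); direction follows the right-hand rule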