Edges and Contours – Chapter 7

Visual perception We don’t need to see all the color detail to recognize the scene content of an image. That is, some data provides critical information for recognition, while other data just makes things look “good”.

Visual perception Sometimes we see things that are not really there!!! Kanizsa Triangle (and variants)

Edges Edges (single points) and contours (chains of edges) play a dominant role in (various) biological vision systems
– Edges are spatial positions in the image where the intensity changes along some orientation (direction)
– The larger the change in intensity, the stronger the edge
– The basis of edge detection is the first derivative of the image intensity “function”

First derivative – continuous f(x) Slope of the line tangent to the function at the point
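For reference (added here, not on the original slide), the tangent-slope description corresponds to the usual limit definition of the derivative:

\[
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
\]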

First derivative – discrete f(u) Slope of the line joining the two points adjacent to the selected point, f(u−1) and f(u+1)
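In symbols (added here for clarity, not on the original slide), this is the central-difference approximation:

\[
\frac{df}{du}(u) \approx \frac{f(u+1) - f(u-1)}{2}
\]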

Discrete edge detection Formulated as two partial derivatives
– Horizontal gradients yield vertical edges
– Vertical gradients yield horizontal edges
– Upon detection we can learn the magnitude (strength) and orientation of the edge (formulas below)
More in a minute…
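The two partial derivatives combine into the gradient; its magnitude and orientation (standard formulas, added here for reference) are:

\[
|\nabla I| = \sqrt{\left(\frac{\partial I}{\partial u}\right)^{2} + \left(\frac{\partial I}{\partial v}\right)^{2}},
\qquad
\theta = \tan^{-1}\!\left(\frac{\partial I/\partial v}{\partial I/\partial u}\right)
\]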

NOTE In the following images, only the positive-magnitude edges are shown. This is an artifact of the ImageJ Process->Filters->Convolve… command; implemented as an edge operator, the code would have to compensate for this.

Detecting edges – sharp image (figure: original image, vertical edges, horizontal edges)

Detecting edges – blurry image (figure: original image, vertical edges, horizontal edges)

The problem… Localized (small neighborhood) detectors are susceptible to noise

The solution Extend the neighborhood covered by the filter
– Make the filter 2-dimensional
Perform a smoothing step prior to the derivative
– Since the operators are linear filters, we can combine the smoothing and derivative operations into a single convolution (see the sketch below)
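A minimal sketch of that last point (not from the slides; it assumes NumPy and SciPy are available): convolving a 1-D averaging kernel with a 1-D derivative kernel yields a single 2-D kernel that does both jobs in one pass.

```python
# A minimal sketch (assumptions: NumPy and SciPy are available) showing that a
# 1-D averaging kernel and a 1-D derivative kernel can be folded into a single
# 2-D convolution kernel, so smoothing and differentiation cost one pass.
import numpy as np
from scipy.signal import convolve2d

smooth = np.array([[1], [1], [1]])   # 3-tap vertical averaging (column vector)
deriv = np.array([[-1, 0, 1]])       # horizontal central difference (row vector)

combined = convolve2d(smooth, deriv)  # one 3x3 kernel doing both jobs
print(combined)
# [[-1  0  1]
#  [-1  0  1]
#  [-1  0  1]]
```

The combined kernel is (up to scaling) the horizontal Prewitt kernel introduced a few slides later.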

Edge operator The following edge operators produce two results
– A “magnitude” edge map (image)
– An “orientation” edge map (image)

Prewitt operator 3x3 neighborhood Equivalent to averaging followed by derivative
– Note that these are convolutions, not matrix multiplications
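A minimal sketch (assumptions: NumPy and SciPy are available, `image` is a 2-D grayscale array, normalization constants are omitted) of the Prewitt operator and the magnitude/orientation edge maps it produces:

```python
# Prewitt operator sketch: two 3x3 convolutions, then magnitude and orientation maps.
import numpy as np
from scipy.ndimage import convolve

PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)  # horizontal gradient -> vertical edges
PREWITT_Y = PREWITT_X.T                           # vertical gradient -> horizontal edges

def prewitt_edges(image):
    img = image.astype(float)
    gx = convolve(img, PREWITT_X)
    gy = convolve(img, PREWITT_Y)
    magnitude = np.hypot(gx, gy)        # edge strength at each pixel
    orientation = np.arctan2(gy, gx)    # edge direction (radians)
    return magnitude, orientation
```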

Prewitt – sharp image

Prewitt – blurry image

Prewitt – noisy image Clearly this is not a good solution… what went wrong?
– The smoothing just smeared out the noise
How could you fix it?
– Perform non-linear noise removal first (sketched below)
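One minimal sketch of that fix (assumptions: SciPy is available, `noisy` is a 2-D grayscale array, and `prewitt_edges` is the sketch from the Prewitt slide, not part of the original lecture): apply a median filter, which is non-linear, before the derivative step.

```python
# Non-linear noise removal before the linear edge operator.
from scipy.ndimage import median_filter

def denoise_then_detect(noisy):
    cleaned = median_filter(noisy, size=3)  # non-linear noise removal (effective on salt-and-pepper noise)
    return prewitt_edges(cleaned)           # then the linear edge operator
```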

Prewitt magnitude and direction

Sobel operator 3x3 neighborhood Equivalent to averaging followed by derivative
– Note that these are convolutions, not matrix multiplications
– Same as Prewitt but the center row/column is weighted more heavily
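For reference (added here, not on the original slide), the two Sobel kernels; the only difference from Prewitt is the weight of 2 in the center row/column:

\[
H_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix},
\qquad
H_y = \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}
\]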

Sobel – sharp image

Sobel – blurry image

Sobel – noisy image Clearly this is not a good solution… what went wrong?
– The smoothing just smeared out the noise
How could you fix it?
– Perform non-linear noise removal first

Sobel magnitude and direction

Still not good… how could we fix this now? Using the direction information (lots of randomly oriented, non-homogeneous directions) can help to eliminate edges due to noise
– This is a “higher level” (intelligent) function

Roberts operator Looks for diagonal gradients rather than horizontal/vertical ones Everything else is similar to the Prewitt and Sobel operators
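For reference (added here, not on the original slide; sign and indexing conventions vary between texts), the two 2x2 Roberts kernels, each sensitive to one diagonal direction:

\[
H_1 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix},
\qquad
H_2 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}
\]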

Roberts magnitude and direction

Compass operators An alternative to computing edge orientation as an estimate derived from two oriented filters (horizontal and vertical) Compass operators employ multiple oriented filters The two most famous are
– Kirsch
– Nevatia-Babu

Kirsch Filter Eight 3x3 kernels
– Theoretically must perform eight convolutions
– Realistically, only compute four convolutions; the other four are merely sign changes
The kernel that produces the maximum response is deemed the winner (see the sketch below)
– Choose its magnitude
– Choose its direction
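A minimal sketch of a Kirsch compass operator (assumptions: NumPy and SciPy are available, `image` is a 2-D grayscale array; all eight convolutions are computed for simplicity rather than using the four-plus-sign-changes shortcut): build the eight kernels by rotating the border ring, convolve with each, and keep the per-pixel maximum response and the index of the winning kernel.

```python
# Kirsch compass operator sketch: eight oriented kernels, maximum response wins.
import numpy as np
from scipy.ndimage import convolve

def kirsch_kernels():
    # Border positions of a 3x3 kernel in clockwise order; rotating the value
    # ring around this border generates the eight oriented kernels.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for shift in range(8):
        k = np.zeros((3, 3))
        for (r, c), v in zip(ring, np.roll(base, shift)):
            k[r, c] = v
        kernels.append(k)
    return kernels

def kirsch_edges(image):
    responses = np.stack([convolve(image.astype(float), k) for k in kirsch_kernels()])
    magnitude = responses.max(axis=0)     # strongest response at each pixel
    direction = responses.argmax(axis=0)  # index 0..7 of the winning kernel (orientations 45 degrees apart)
    return magnitude, direction
```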

Kirsch filter kernels (figure: vertical edges, L-R diagonal edges, R-L diagonal edges, horizontal edges)

Kirsch filter

Nevatia-Babu Filter Twelve 5x5 kernels
– Theoretically must perform twelve convolutions
– Oriented in increments of approximately 30°
– Realistically, only compute six convolutions; the other six are merely sign changes
The kernel that produces the maximum response is deemed the winner
– Choose its magnitude
– Choose its direction

Nevatia-Babu filter