Instructor: Mircea Nicolescu Lecture 5 CS 485 / 685 Computer Vision

Image Noise
Additive random noise: Î(x, y) = I(x, y) + n(x, y), where I(x, y) is the true pixel value and n(x, y) is the (random) noise at pixel (x, y).

Image Noise
Gaussian noise:
−Used to model additive random noise.
−The probability of the noise n(x, y) taking value n is p(n) = (1 / (√(2π) σ)) · exp(−n² / (2σ²)), a zero-mean Gaussian with standard deviation σ.
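A minimal sketch of simulating this noise model in Python/NumPy (assuming an 8-bit grayscale image stored as a NumPy array; the function name and default sigma are illustrative, not from the slides):

    import numpy as np

    def add_gaussian_noise(image, sigma=10.0):
        """Add zero-mean Gaussian noise with standard deviation sigma."""
        noise = np.random.normal(loc=0.0, scale=sigma, size=image.shape)
        noisy = image.astype(np.float64) + noise
        return np.clip(noisy, 0, 255).astype(np.uint8)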

Image Noise
Salt-and-pepper noise:
−Used to model impulsive noise.
−Alters random pixels, making their values very different from the true ones.
−In the model, p and r are uniformly distributed random variables; l, s_min, and s_max are constants.
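A simple way to simulate salt-and-pepper corruption (a sketch; the corrupted-pixel fraction and function name are illustrative):

    import numpy as np

    def add_salt_and_pepper(image, amount=0.05):
        """Set a random fraction of pixels to 0 (pepper) or 255 (salt)."""
        noisy = image.copy()
        r = np.random.rand(*image.shape)         # one uniform random value per pixel
        noisy[r < amount / 2] = 0                # pepper
        noisy[r > 1 - amount / 2] = 255          # salt
        return noisy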

Smoothing Using Averaging
Idea: replace each pixel by the average of its neighbors.
Useful for reducing random additive noise and removing unimportant detail.
The size of the mask controls the amount of smoothing.
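One way to apply such an averaging mask (a sketch using SciPy's ndimage convolution; the border mode is an assumption):

    import numpy as np
    from scipy.ndimage import convolve

    def mean_filter(image, size=3):
        """Smooth with a size x size averaging mask (all weights equal)."""
        mask = np.ones((size, size)) / (size * size)
        return convolve(image.astype(np.float64), mask, mode='nearest')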

Smoothing Using Averaging
Trade-off: less noise vs. blurring and loss of detail.
[Figure: the original image smoothed with 3×3, 5×5, 7×7, 15×15, and 25×25 averaging masks.]

Gaussian Smoothing
Idea: replace each pixel by a weighted average of its neighbors.
Mask weights are computed by sampling a Gaussian function.
Note: weight values decrease with distance from the mask center.

Gaussian Smoothing
σ determines the degree of smoothing (the example mask uses σ = 3).
As σ increases, the size of the mask must also increase if we are to sample the Gaussian satisfactorily.
Usual rule: choose height = width = 5σ (subtends 98.76% of the area under the Gaussian).
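A sketch of building such a mask by sampling a 2D Gaussian on a grid of side roughly 5σ (the odd-size rounding is an implementation choice, not from the slides):

    import numpy as np

    def gaussian_kernel(sigma):
        """Sample a 2D Gaussian on a grid of side about 5*sigma (odd width)."""
        size = int(np.ceil(5 * sigma)) | 1              # force an odd width
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return g / g.sum()                              # weights sum to 1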

Gaussian Smoothing - Example
[Figure: the same image smoothed with σ = 1, σ = 5, σ = 10, and σ = 30.]

Averaging vs. Gaussian Smoothing
[Figure: side-by-side results of the averaging mask and the Gaussian mask.]

Properties of the Gaussian
The convolution of two Gaussians is another Gaussian.
Special case: convolving twice with a Gaussian kernel of width σ is equivalent to convolving once with a kernel of width σ√2.
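A quick numerical check of the special case (a sketch; the grid size and σ value are arbitrary):

    import numpy as np

    sigma = 2.0
    x = np.arange(-30, 31)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    gg = np.convolve(g, g)                      # convolve the Gaussian with itself
    xx = np.arange(-60, 61)                     # support of the result
    width = np.sqrt((gg * xx**2).sum())         # standard deviation of the result
    print(width, sigma * np.sqrt(2))            # both are about 2.83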

Properties of the Gaussian
Separable kernel: a 2D Gaussian can be expressed as the product of two 1D Gaussians, G(x, y) = g(x) · g(y).

Properties of the Gaussian
2D Gaussian convolution can be implemented more efficiently using two 1D convolutions:

Properties of the Gaussian
Convolve each row of the image I with the 1D kernel g to get an intermediate image I_r, then convolve each column of I_r with g.

Example: 2D convolution (center location only)
The filter factors into a product of 1D filters.
Perform the convolution along the rows, followed by a convolution along the remaining column.
Cost per pixel: O(n²) for direct 2D convolution with an n×n filter vs. O(2n) = O(n) for the two 1D convolutions.
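A sketch of separable Gaussian smoothing as two 1D passes (using SciPy's convolve1d; truncating the kernel at about 2.5σ per side follows the 5σ rule above):

    import numpy as np
    from scipy.ndimage import convolve1d

    def gaussian_smooth_separable(image, sigma):
        """Convolve the rows, then the columns, with a sampled 1D Gaussian."""
        half = int(np.ceil(2.5 * sigma))
        x = np.arange(-half, half + 1)
        g = np.exp(-x**2 / (2 * sigma**2))
        g /= g.sum()
        rows_done = convolve1d(image.astype(np.float64), g, axis=1, mode='nearest')
        return convolve1d(rows_done, g, axis=0, mode='nearest')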

Median Filter
Effective for removing "salt-and-pepper" noise (random occurrences of black or white pixels).

Median Filter
Replace each pixel value by the median of the gray levels in the neighborhood of that pixel.
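A short sketch using SciPy's median filter on a toy image with impulsive outliers (the image size and noise fraction are illustrative):

    import numpy as np
    from scipy.ndimage import median_filter

    image = np.full((64, 64), 128, dtype=np.uint8)                # flat toy image
    corrupted = image.copy()
    hits = np.random.rand(*image.shape) < 0.05                    # 5% impulsive pixels
    corrupted[hits] = np.random.choice([0, 255], size=hits.sum())

    denoised = median_filter(corrupted, size=3)                   # 3x3 median filter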

Image Sharpening
Idea: compute intensity differences in local image regions.
Useful for emphasizing transitions in intensity (e.g., in edge detection).
The mask shown is the 2nd derivative of a Gaussian.
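One common way to use a 2nd-derivative-of-Gaussian (Laplacian-of-Gaussian) mask for sharpening is to subtract a scaled response from the image; a sketch (sigma and strength are illustrative choices):

    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def sharpen(image, sigma=1.0, strength=1.0):
        """Subtract a scaled Laplacian-of-Gaussian response to boost transitions."""
        img = image.astype(np.float64)
        response = gaussian_laplace(img, sigma=sigma)
        return np.clip(img - strength * response, 0, 255).astype(np.uint8)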

Example

Geometric Transformations
Modify the arrangement of pixels based on some geometric transformation.

Geometric Transformations
Translation: r' = r + t_r, c' = c + t_c
Scaling: r' = s_r · r, c' = s_c · c
Rotation (by θ about (r_0, c_0)): r' = r_0 + (r − r_0)·cos(θ) − (c − c_0)·sin(θ), c' = c_0 + (r − r_0)·sin(θ) + (c − c_0)·cos(θ)
Affine transformation: r' = a_11·r + a_12·c + b_1, c' = a_21·r + a_22·c + b_2
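The transforms above can all be written in matrix form; a sketch of applying a rotation about (r_0, c_0) as an affine map (the angle and center are just example values):

    import numpy as np

    def affine_map(r, c, A, b):
        """Apply [r', c']^T = A [r, c]^T + b to one (row, column) coordinate."""
        return A @ np.array([r, c]) + b

    theta, r0, c0 = np.deg2rad(30), 50.0, 50.0
    A = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    b = np.array([r0, c0]) - A @ np.array([r0, c0])   # chosen so (r0, c0) stays fixed
    print(affine_map(60.0, 50.0, A, b))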

Geometric Transformations - Some Practical Problems
1) Transformed pixel coordinates might not lie within the bounds of the image.
2) Transformed pixel coordinates can be non-integer.
3) There might be no pixels in the input image that map to certain pixel locations in the transformed image (one-to-one correspondence can be lost).

Geometric Transformations - Problem (3): Forward vs. Inverse Mapping
−To guarantee that a value is generated for every pixel in the output image, we consider each output pixel in turn and use the inverse mapping to determine its position in the input image.
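A sketch of inverse mapping for an affine transform, sampling the input with nearest-neighbor rounding (a grayscale image is assumed; pixels that map outside the input are left at zero):

    import numpy as np

    def warp_inverse(image, A, b):
        """Fill every output pixel by inverse-mapping it into the input image."""
        A_inv = np.linalg.inv(A)
        out = np.zeros_like(image)
        rows, cols = image.shape
        for r_out in range(rows):
            for c_out in range(cols):
                r_in, c_in = A_inv @ (np.array([r_out, c_out], dtype=float) - b)
                ri, ci = int(round(r_in)), int(round(c_in))   # nearest neighbor
                if 0 <= ri < rows and 0 <= ci < cols:
                    out[r_out, c_out] = image[ri, ci]
        return out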

Geometric Transformations - Problem (2): Image Interpolation
−Interpolation is the process of estimating the value of a transformed pixel whose coordinates are non-integer by examining its surrounding pixels.
−Zero-order interpolation (or nearest-neighbor): use the value of the closest input pixel.

Geometric Transformations
−First-order (bilinear) interpolation: weight the four surrounding pixels by how close they are to the sample position.
−Higher-order interpolation schemes are more sophisticated but also more time-consuming.
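A sketch of first-order (bilinear) sampling at a non-integer position (r, c); clamping at the image border is an implementation choice:

    import numpy as np

    def bilinear_sample(image, r, c):
        """First-order interpolation of a grayscale image at non-integer (r, c)."""
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        r1 = min(r0 + 1, image.shape[0] - 1)
        c1 = min(c0 + 1, image.shape[1] - 1)
        dr, dc = r - r0, c - c0
        top = (1 - dc) * image[r0, c0] + dc * image[r0, c1]
        bottom = (1 - dc) * image[r1, c0] + dc * image[r1, c1]
        return (1 - dr) * top + dr * bottom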

Edge Detection
Definition of edges:
−Edges are significant local changes of intensity in an image.
−Edges typically occur on the boundary between two different regions in an image.

What Causes Intensity Changes?
Geometric events:
−surface orientation (boundary) discontinuities
−depth discontinuities
−color and texture discontinuities
Non-geometric events:
−illumination changes
−specularities
−shadows
−inter-reflections
[Figure: a scene illustrating depth, color, illumination, and surface-normal discontinuities.]

Goal of Edge Detection
Produce a line drawing of a scene from an image of that scene.
Important features can be extracted from the edges of an image (e.g., corners, lines, curves).
These features are used by higher-level computer vision algorithms (e.g., segmentation, recognition).

Effect of Illumination

Edge Descriptors
−Edge normal: unit vector in the direction of maximum intensity change.
−Edge direction: unit vector perpendicular to the edge normal.
−Edge position or center: the image position at which the edge is located.
−Edge strength: related to the local image contrast along the normal.

Modeling Intensity Changes
Edges can be modeled according to their intensity profiles:
Step edge:
−the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.

Modeling Intensity Changes
Ramp edge:
−a step edge where the intensity change is not instantaneous but occurs over a finite distance.

Modeling Intensity Changes
Ridge edge:
−the image intensity abruptly changes value but then returns to the starting value within some short distance
−usually generated by lines

Modeling Intensity Changes
Roof edge:
−a ridge edge where the intensity change is not instantaneous but occurs over a finite distance
−usually generated by the intersection of surfaces

Main Steps in Edge Detection
The four steps of edge detection (a sketch of the first three steps follows below):
−Smoothing: suppress as much noise as possible, without destroying the true edges.
−Enhancement: apply a filter that responds to edges in the image.
−Detection/thresholding: determine which edge pixels should be discarded as noise and which should be retained (usually, thresholding provides the criterion used for detection).
−Localization: determine the exact location of an edge (sub-pixel resolution might be required for some applications, to estimate the location of an edge to better than the spacing between pixels). Edge thinning and linking are usually required in this step.
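A minimal sketch of the smoothing, enhancement, and thresholding steps using a Gaussian filter and Sobel gradients (sigma and the threshold are illustrative values, not prescribed by the slides):

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def detect_edges(image, sigma=1.0, threshold=50.0):
        """Smoothing -> enhancement (gradient magnitude) -> thresholding."""
        smoothed = gaussian_filter(image.astype(np.float64), sigma=sigma)
        g_r = sobel(smoothed, axis=0)         # gradient along rows
        g_c = sobel(smoothed, axis=1)         # gradient along columns
        magnitude = np.hypot(g_r, g_c)        # edge strength
        return magnitude > threshold          # boolean edge map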