1
Image Processing: Edge detection; Filtering: Noise suppression
2
Image filtering
Linear filtering = applying a local function to the image as a sum of weighted neighboring pixels, such as blurring:
[Figure: input image * kernel (3x3, each weight 1/9 ≈ 0.11) = output image]
3
Image filtering
Mean filter: output image g = input image f * filter h
[Figure: input image f, 3x3 mean filter h with each weight 1/9 ≈ 0.11, output image g]
4
Image filtering
Linear filters can have arbitrary weights. Typically they sum to 0 or 1, but not always. Weights may be positive or negative. Many filters aren't linear (e.g., the median filter).
[Figure: a kernel with a single weight of 1] What does this filter do?
5
Animation of Convolution
To accomplish convolution of the whole image, we just slide the mask over every pixel, as in the sketch below.
[Figure: original image, mask, and the image after convolution; output cells C4,2, C4,3, C4,4 are filled in as the mask slides.]
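As a concrete illustration of the sliding-mask idea, here is a minimal NumPy sketch of 2D convolution with zero-padded borders. The function name, padding choice, and the example mean kernel are illustrative, not from the slides.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide the (flipped) mask over every pixel and sum the weighted
    neighborhood; borders are zero-padded. A minimal, unoptimized sketch."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    flipped = kernel[::-1, ::-1]          # true convolution flips the mask
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * flipped)
    return out

# Example: a 3x3 mean filter (each weight 1/9 ≈ 0.11) blurs the image.
blurred = convolve2d(np.random.rand(8, 8), np.ones((3, 3)) / 9.0)
```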
6
Gaussian filter
Output image g = input image f * filter h. The filter weights are samples of a 2D Gaussian; compute them empirically.
[Figure: input image f, Gaussian filter h, output image g]
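A small sketch of how such a kernel could be built by sampling the 2D Gaussian on a grid and normalizing; the size and sigma below are illustrative choices, not values from the lecture.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sample exp(-(x^2 + y^2) / (2 sigma^2)) on a size x size grid and
    normalize so the weights sum to 1. (size and sigma are illustrative.)"""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

h = gaussian_kernel(5, sigma=1.0)   # center weight is largest; weights sum to 1
```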
7
Gaussian vs. mean filters
What does real blur look like?
8
Convolution for Noise Elimination
The noise is eliminated, but the operation causes a loss of sharp edge definition. In other words, the image becomes blurred.
9
Convolution with a non-linear mask
Median filtering
Many masks are used for noise elimination; the median mask is a typical one. The principle of the median mask: for each pixel, take the sub-image covered by the mask and use the median of its values as the pixel's value in the new image.
[Figure: a 3x3 neighborhood of the original image I; ranked values 23, 47, 64, 65, 72, 90, 120, 187, 209; the median, 72, becomes the corresponding value in the new image J.]
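A minimal sketch of the median mask in NumPy; the edge-replicated border and the default 3x3 size are assumptions for illustration.

```python
import numpy as np

def median_filter(image, size=3):
    """For each pixel, take the size x size sub-image under the mask and use
    the median of its values in the new image (edge-replicated borders)."""
    pad = size // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty_like(image)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.median(padded[y:y + size, x:x + size])
    return out
```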
10
Median Filtering
11
Back to Linear filters: First and second derivatives
12
First and second derivatives
What are these good for?
[Figure: original image; first derivative in x; second derivative in x and y.]
13
Subtracting filters
[Figure: original image; second derivative; sharpened image obtained by subtracting the second derivative from the original.]
14
Combining filters
For some filters, applying two filters in sequence is equivalent to applying a single combined filter, since convolution is associative: (f * h1) * h2 = f * (h1 * h2).
[Figure: two worked examples of combined kernels, one involving a (-1 1) derivative kernel and one involving a kernel with center weight 4; "It's also true" for the second combination.]
15
Combining Gaussian filters
Gaussian * Gaussian = ? Another Gaussian: more blur than either individually, but less than the sum of the two blur widths (the standard deviations combine as σ = √(σ1² + σ2²), not σ1 + σ2). A quick numeric check follows below.
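A small numeric check of this identity on 1D kernels; the sigmas (2 and 3) and support radii are assumed values for illustration.

```python
import numpy as np

def gauss1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

# Blurring with sigma=2 then sigma=3 ~ one blur with sigma=sqrt(2^2 + 3^2).
combined = np.convolve(gauss1d(2.0, 20), gauss1d(3.0, 20))   # full convolution
direct = gauss1d(np.sqrt(2.0**2 + 3.0**2), 40)               # same length (81)
print(np.abs(combined - direct).max())   # small; exact in the continuous case
```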
16
Separable filters
Compute the Gaussian in the horizontal direction, followed by the vertical direction. Much faster!
[Figure: 2D Gaussian kernel = horizontal 1D Gaussian * vertical 1D Gaussian]
Not all filters are separable. (Freeman and Adelson, 1991)
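A sketch of the two-pass idea: blur each row with a 1D Gaussian, then each column. The function name and parameter defaults are illustrative.

```python
import numpy as np

def separable_gaussian_blur(image, sigma=2.0, radius=6):
    """Blur rows with a 1D Gaussian, then blur columns with the same 1D
    Gaussian. For a (2r+1)x(2r+1) kernel this costs about 2(2r+1) multiplies
    per pixel instead of (2r+1)^2. (sigma and radius are illustrative.)"""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()
    blur_1d = lambda v: np.convolve(v, g, mode='same')
    rows_done = np.apply_along_axis(blur_1d, 1, image.astype(float))
    return np.apply_along_axis(blur_1d, 0, rows_done)
```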
17
Sums of rectangular regions
How do we compute the sum of the pixels in the red box?
[Figure: a grid of pixel intensity values with a red box marking a rectangular region.]
After some pre-computation, this can be done in constant time for any box. This "trick" is commonly used for computing Haar wavelets (a fundamental building block of many object recognition approaches).
18
Sums of rectangular regions
The trick is to compute an "integral image." Every pixel is the sum of all the pixels above and to its upper left. Sequentially compute using: I(x, y) = f(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1).
19
Sums of rectangular regions
[Figure: integral-image values A, B, C, D at the four corners of the box.]
Solution is found using: A + D - B - C
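A minimal sketch of the integral image and the constant-time box sum; function names and the 0-indexed inclusive bounds are assumptions for illustration.

```python
import numpy as np

def integral_image(image):
    """Each entry is the sum of all pixels above and to the left (inclusive):
    I(x, y) = f(x, y) + I(x-1, y) + I(x, y-1) - I(x-1, y-1)."""
    return image.astype(np.int64).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, top, left, bottom, right):
    """Sum over the inclusive box via the four corner values, A + D - B - C,
    in constant time. (0-indexed inclusive bounds; a sketch, no bounds checks.)"""
    D = ii[bottom, right]
    B = ii[top - 1, right] if top > 0 else 0
    C = ii[bottom, left - 1] if left > 0 else 0
    A = ii[top - 1, left - 1] if (top > 0 and left > 0) else 0
    return D - B - C + A
```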
20
Spatially varying filters
Some filters vary spatially (Durand, 2002). Useful for deblurring.
21
Constant blur
Same Gaussian kernel everywhere.
[Figure: input and output images; the same kernel is applied at several locations.]
Slides courtesy of Sylvain Paris
22
Bilateral filter
Maintains edges when blurring!
[Figure: input and output images; the kernel shape depends on the image content, so it differs at each location.]
Slides courtesy of Sylvain Paris
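A minimal sketch of a bilateral filter to make "the kernel depends on the image content" concrete; the parameter values and the assumption of intensities roughly in [0, 1] are illustrative, not from the slides.

```python
import numpy as np

def bilateral_filter(image, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Each neighbor is weighted by a spatial Gaussian AND by how close its
    intensity is to the center pixel, so the effective kernel adapts to the
    image content and edges are preserved. (Illustrative parameters;
    assumes intensities roughly in [0, 1].)"""
    img = image.astype(float)
    padded = np.pad(img, radius, mode='edge')
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_s**2))
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            range_w = np.exp(-(patch - img[y, x])**2 / (2.0 * sigma_r**2))
            w = spatial * range_w
            out[y, x] = np.sum(w * patch) / np.sum(w)
    return out
```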
23
Borders
What to do about image borders: black, fixed, periodic, reflected. A small demo of these options follows below.
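The four border strategies map onto NumPy's padding modes; a tiny 1D demo (images pad the same way along each axis). The toy signal is an assumption for illustration.

```python
import numpy as np

row = np.array([1, 2, 3, 4])
print(np.pad(row, 2, mode='constant'))  # black / zeros:      0 0 1 2 3 4 0 0
print(np.pad(row, 2, mode='edge'))      # fixed (replicate):  1 1 1 2 3 4 4 4
print(np.pad(row, 2, mode='wrap'))      # periodic:           3 4 1 2 3 4 1 2
print(np.pad(row, 2, mode='reflect'))   # reflected:          3 2 1 2 3 4 3 2
```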
24
Sampling Larry Zitnick
25
Up-sampling How do we compute the values of pixels at fractional positions?
26
Up-Sampling as a Convolution Application Example
Zero-interlace the original image (insert zeros between the original pixels), then convolve.
[Figure: worked example; one enlarged-image pixel works out to 13/9.]
The value of a pixel in the enlarged image is a weighted average of the surrounding pixel values. The difference between the inserted zeros and the original pixel values is "smoothed" out by the convolution.
27
Up-Sampling as a Convolution Application Example
28
Up-sampling: General solutions
How do we compute the values of pixels at fractional positions?
Bilinear sampling:
f(x + a, y + b) = (1 - a)(1 - b) f(x, y) + a(1 - b) f(x + 1, y) + (1 - a) b f(x, y + 1) + a b f(x + 1, y + 1)
[Figure: the four surrounding pixels f(x, y), f(x+1, y), f(x, y+1), f(x+1, y+1) and the fractional position f(x+0.8, y+0.3).]
Bicubic sampling fits a higher-order function using a larger area of support.
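The bilinear formula above translates directly into code; this sketch assumes image[row, col] indexing and does no bounds checking.

```python
import numpy as np

def bilinear_sample(image, x, y):
    """Value at the fractional position (x, y) = (x0 + a, y0 + b) as the
    area-weighted blend of the four surrounding pixels. (image[row, col]
    indexing, no bounds checking; a sketch of the formula on the slide.)"""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    a, b = x - x0, y - y0
    return ((1 - a) * (1 - b) * image[y0,     x0    ] +
            a       * (1 - b) * image[y0,     x0 + 1] +
            (1 - a) * b       * image[y0 + 1, x0    ] +
            a       * b       * image[y0 + 1, x0 + 1])
```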
29
Up-sampling
[Figure: results with nearest neighbor, bilinear, and bicubic interpolation.]
30
Down-sampling
If you do it incorrectly your images could look like this: [Figure: badly down-sampled image with aliasing artifacts.] Check out Moiré patterns on the web.
31
Down-sampling
Aliasing can arise when you sample a continuous signal or image. It occurs when your sampling rate is not high enough to capture the amount of detail in your image, and it can give you the wrong signal/image: an alias. More formally, the image contains structure at different scales, called "frequencies" in the Fourier domain, and the sampling rate must be high enough to capture the highest frequency in the image.
32
Solution
Filter before sampling, i.e. blur the image first.
[Figure: down-sampling results with and without blur.]
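A minimal sketch of filter-then-sample, using a 2x2 box average as a crude stand-in for the Gaussian blur the slide suggests; the function name is illustrative.

```python
import numpy as np

def downsample_2x(image):
    """Filter before sampling: average each 2x2 block (a simple low-pass
    filter), then keep one value per block instead of naively taking
    every other pixel."""
    h, w = image.shape
    h, w = h - h % 2, w - w % 2                 # crop to an even size
    img = image[:h, :w].astype(float)
    return 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                   img[0::2, 1::2] + img[1::2, 1::2])
```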
33
Edge Detection Larry Zitnick
34
What is an edge? Cole et al. Siggraph 2008, results of 107 humans.
35
Origin of edges
Edges are caused by a variety of factors: surface normal discontinuity, depth discontinuity, surface color discontinuity, illumination discontinuity.
36
Illusory contours
37
Image gradient
The gradient of an image: ∇f = (∂f/∂x, ∂f/∂y)
The gradient points in the direction of most rapid change in intensity.
38
Image gradient
The gradient direction is given by: θ = atan2(∂f/∂y, ∂f/∂x). How does this relate to the direction of the edge?
The edge strength is given by the gradient magnitude: ‖∇f‖ = √((∂f/∂x)² + (∂f/∂y)²).
How would you implement this as a filter?
39
Sobel operator
In practice, it is common to use:
Gx = [ -1 0 1 ; -2 0 2 ; -1 0 1 ]    Gy = [ -1 -2 -1 ; 0 0 0 ; 1 2 1 ]
Magnitude: ‖∇f‖ = √(Gx² + Gy²)
Orientation: θ = atan2(Gy, Gx)
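A minimal sketch that applies the two Sobel masks and returns magnitude and orientation; the edge-replicated border and function name are assumptions for illustration.

```python
import numpy as np

def sobel_gradients(image):
    """Correlate the image with the two Sobel masks and return the edge
    strength (gradient magnitude) and orientation. Borders are
    edge-replicated; a minimal sketch."""
    gx_k = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    gy_k = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)
    img = np.pad(image.astype(float), 1, mode='edge')
    h, w = image.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            patch = img[dy:dy + h, dx:dx + w]
            gx += gx_k[dy, dx] * patch
            gy += gy_k[dy, dx] * patch
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return magnitude, orientation
```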
40
Sobel operator
[Figure: original image, gradient magnitude, gradient orientation.]
41
Effects of noise
Consider a single row or column of the image. Plotting intensity as a function of position gives a signal. Where is the edge?
42
Solution: smooth first
Where is the edge? Look for peaks in the derivative of the smoothed signal, d/dx (f * h).
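A 1D illustration of smooth-then-differentiate; the toy step signal, noise level, and kernel sizes are assumptions chosen for the demo.

```python
import numpy as np

# A noisy step, smoothed with a Gaussian h, then differentiated: the edge
# shows up as a clear peak in d/dx (f * h).
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.2 * rng.standard_normal(100)
x = np.arange(-10, 11)
h = np.exp(-x**2 / (2 * 3.0**2))
h /= h.sum()
smoothed = np.convolve(f, h, mode='same')
deriv = np.convolve(smoothed, [0.5, 0.0, -0.5], mode='same')  # central difference
print(np.argmax(np.abs(deriv)))   # ≈ 50, the location of the step
```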
43
Non-maximum suppression
Check if a pixel is a local maximum along the gradient direction; this requires checking the interpolated pixels p and r on either side of it.
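A simplified sketch of non-maximum suppression: instead of interpolating the sub-pixel values p and r as the slide describes, it snaps the gradient direction to the nearest of four neighbor pairs (a common simplification, labeled as such in the code).

```python
import numpy as np

def non_max_suppression(magnitude, orientation):
    """Keep a pixel only if it is a local maximum along the gradient
    direction. Simplified sketch: the direction is quantized to one of four
    neighbor pairs rather than interpolating the sub-pixel values p and r."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(orientation) % 180.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            a = angle[y, x]
            if a < 22.5 or a >= 157.5:      # gradient roughly horizontal
                p, r = magnitude[y, x - 1], magnitude[y, x + 1]
            elif a < 67.5:                  # roughly 45 degrees
                p, r = magnitude[y - 1, x + 1], magnitude[y + 1, x - 1]
            elif a < 112.5:                 # roughly vertical
                p, r = magnitude[y - 1, x], magnitude[y + 1, x]
            else:                           # roughly 135 degrees
                p, r = magnitude[y - 1, x - 1], magnitude[y + 1, x + 1]
            if magnitude[y, x] >= p and magnitude[y, x] >= r:
                out[y, x] = magnitude[y, x]
    return out
```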
44
Effect of σ (Gaussian kernel spread/size)
[Figure: Canny edges computed with different σ, alongside the original image.]
The choice of σ depends on desired behavior: a large σ detects large-scale edges, a small σ detects fine features.
45
An edge is not a line... How can we detect lines?
46
Finding lines in an image
Option 1: Search for the line at every possible position/orientation. What is the cost of this operation?
Option 2: Use a voting scheme: the Hough transform.
47
Finding lines in an image
[Figure: image space with axes x, y showing a line; Hough space with axes m, b showing the corresponding point (m0, b0).]
Connection between image (x, y) and Hough (m, b) spaces: a line in the image corresponds to a point in Hough space. To go from image space to Hough space: given a set of points (x, y), find all (m, b) such that y = mx + b.
48
Finding lines in an image
What does a point (x0, y0) in the image space map to?
A: the solutions of b = -x0 m + y0; this is a line in Hough space.
[Figure: the point (x0, y0) in image space and the corresponding line in Hough (m, b) space.]
49
Hough transform algorithm
Typically we use a different parameterization: d = x cos θ + y sin θ, where d is the perpendicular distance from the line to the origin and θ is the angle of that perpendicular. Why?
50
Hough transform algorithm
Basic Hough transform algorithm
Initialize H[d, θ] = 0
for each edge point I[x, y] in the image
    for θ = 0 to 180
        d = x cos θ + y sin θ
        H[d, θ] += 1
Find the value(s) of (d, θ) where H[d, θ] is maximum
The detected line in the image is given by d = x cos θ + y sin θ
What's the running time (measured in # votes)?
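A minimal NumPy sketch of the voting loop above; the quantization of d to integer pixels, the non-negative index offset, and the function name are assumptions for illustration.

```python
import numpy as np

def hough_lines(edge_mask, num_thetas=180):
    """Voting version of the algorithm above: every edge pixel votes for all
    lines d = x cos(theta) + y sin(theta) passing through it. d is quantized
    to integer pixels and offset so indices stay non-negative (a sketch)."""
    h, w = edge_mask.shape
    thetas = np.deg2rad(np.arange(num_thetas))            # theta = 0..179 deg
    d_max = int(np.ceil(np.hypot(h, w)))
    H = np.zeros((2 * d_max + 1, num_thetas), dtype=np.int64)
    ys, xs = np.nonzero(edge_mask)
    for x, y in zip(xs, ys):
        ds = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        H[ds + d_max, np.arange(num_thetas)] += 1         # one vote per theta
    d_idx, t_idx = np.unravel_index(np.argmax(H), H.shape)
    return H, d_idx - d_max, np.degrees(thetas[t_idx])    # strongest line
```

The running time is one vote per edge point per θ sample, i.e. (# edge points) x (# θ values), which is the answer to the question on the slide.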
51
Hough transform algorithm
Sketch this parameterization on the board.
53
Extensions
Extension 1: use the image gradient. Same as before, but for each edge point I[x, y] in the image, compute a unique (d, θ) based on the image gradient at (x, y), then H[d, θ] += 1. What's the running time measured in votes?
Extension 2: give more votes for stronger edges.
Extension 3: change the sampling of (d, θ) to give more/less resolution.
Extension 4: the same procedure can be used with circles, squares, or any other shape.
Ask: what are the axes for a circle Hough space? What does a point in the image map to in the Hough space?