Computer Vision: Linear filters and edges
Marc Pollefeys, COMP 256
Last class
HSV
Tentative class schedule
Jan 16/18 – Introduction
Jan 23/25 – Cameras / Radiometry
Jan 30/Feb 1 – Sources & Shadows / Color
Feb 6/8 – Linear filters & edges / Texture
Feb 13/15 – Multi-View Geometry / Stereo
Feb 20/22 – Optical flow / Project proposals
Feb 27/Mar 1 – Affine SfM / Projective SfM
Mar 6/8 – Camera Calibration / Silhouettes and Photoconsistency
Mar 13/15 – Spring break
Mar 20/22 – Segmentation / Fitting
Mar 27/29 – Prob. Segmentation / Project Update
Apr 3/5 – Tracking
Apr 10/12 – Object Recognition
Apr 17/19 – Range data
Apr 24/26 – Final project
Project
– Project proposals (Feb 22): 4-minute presentation and 1-2 page proposal
– Project update (Mar 29): 4-minute presentation
– Final project presentation (Apr 24/26): presentation/demo and short paper-style report
– One or more students per project; each needs an identifiable contribution (modules, alternative approaches, …)
Project proposals
Some ideas:
– Vision-based UI (gesture recognition, …)
– Tracking people with PTZ camera(s) (single camera, multi-camera collaborative)
– 3D reconstruction using Shape-from-?
– Camera network calibration
– Computer vision on GPU
– Environment reconstruction with UAVs
– …
Come talk to me!
Linear Filters
General process:
– Form a new image whose pixels are a weighted sum of the original pixel values, using the same set of weights at each point.
Properties:
– the output is a linear function of the input
– the output is a shift-invariant function of the input (i.e. shift the input image two pixels to the left, and the output is shifted two pixels to the left)
Examples:
– smoothing by averaging: form the average of pixels in a neighbourhood
– smoothing with a Gaussian: form a weighted average of pixels in a neighbourhood
– finding a derivative: form a weighted average of pixels in a neighbourhood
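The first example above, smoothing by averaging, can be sketched directly: each output pixel is the mean of a small neighbourhood, with the same weights everywhere. This is a minimal NumPy sketch (the function name `box_filter` and the edge-padding choice are mine, not from the lecture):

```python
import numpy as np

def box_filter(image, size=3):
    """Smooth by averaging: each output pixel is the mean of a
    size x size neighbourhood (same weights at every point)."""
    pad = size // 2
    padded = np.pad(image, pad, mode="edge")  # replicate border pixels
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = padded[i:i + size, j:j + size].mean()
    return out

# A single bright pixel spreads into a little square under averaging.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = box_filter(img)
```

Note how the isolated point becomes a 3x3 square of equal values, which is exactly the behaviour criticised later when averaging is compared to a defocussed lens.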
Convolution
Represent these weights as an image, H; H is usually called the kernel. The operation is called convolution:
– it's associative
The result is: R(i,j) = Σ_{u,v} H(i−u, j−v) F(u,v)
Notice the weird order of indices:
– all examples can be put in this form
– it's a result of the derivation expressing any shift-invariant linear operator as a convolution
Convolution (3×3 example):
o(i,j) = c11 f(i−1,j−1) + c12 f(i−1,j) + c13 f(i−1,j+1)
       + c21 f(i,j−1)   + c22 f(i,j)   + c23 f(i,j+1)
       + c31 f(i+1,j−1) + c32 f(i+1,j) + c33 f(i+1,j+1)
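The weighted sum spelled out above can be sketched as a direct 2D convolution. In this minimal sketch (`conv2d` is a name I'm introducing), the kernel flip makes the "weird" index order explicit: convolution with a flipped kernel is just a sliding-window dot product, and convolving a delta image reproduces the kernel.

```python
import numpy as np

def conv2d(F, H):
    """2D convolution R[i,j] = sum_{u,v} H[u,v] * F[i-u, j-v],
    computed over the 'valid' region only. Flipping the kernel turns
    convolution into a plain sliding-window dot product."""
    kh, kw = H.shape
    Hf = H[::-1, ::-1]                       # the 'weird' index order
    oh = F.shape[0] - kh + 1
    ow = F.shape[1] - kw + 1
    R = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            R[i, j] = np.sum(Hf * F[i:i + kh, j:j + kw])
    return R

# Convolving a delta image with H reproduces H (unflipped).
F = np.zeros((5, 5)); F[2, 2] = 1.0
H = np.array([[1., 2.], [3., 4.]])
R = conv2d(F, H)
```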
Example: smoothing by averaging
Smoothing with a Gaussian
Smoothing with an average actually doesn't compare at all well with a defocussed lens:
– the most obvious difference is that a single point of light viewed through a defocussed lens looks like a fuzzy blob, but the averaging process would give a little square
A Gaussian gives a good model of a fuzzy blob.
An isotropic Gaussian
The picture shows a smoothing kernel proportional to exp(−(x² + y²)/(2σ²)), which is a reasonable model of a circularly symmetric fuzzy blob.
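A sampled version of this kernel is easy to build. This is an illustrative sketch (the function name, the ±3σ truncation radius, and the normalisation to unit sum are my choices, not from the slide):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled isotropic Gaussian exp(-(x^2+y^2)/(2 sigma^2)),
    normalized so the weights sum to 1."""
    if radius is None:
        radius = int(3 * sigma)          # +-3 sigma captures ~99.7% of mass
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    K = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return K / K.sum()                   # unit sum: constant regions unchanged

K = gaussian_kernel(sigma=1.0)           # 7x7 kernel, peak at the centre
```

Normalising to unit sum means smoothing leaves constant image regions untouched.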
Smoothing with a Gaussian
Differentiation and convolution
Recall the definition of the derivative: ∂f/∂x = lim_{ε→0} (f(x+ε, y) − f(x, y))/ε
Now this is linear and shift invariant, so it must be the result of a convolution. We could approximate it as the finite difference (f(x_{n+1}, y) − f(x_n, y))/Δx, which is obviously a convolution; it's not a very good way to do things, as we shall see.
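The finite-difference approximation above corresponds to convolution with a tiny kernel. A minimal 1D sketch (the function name `dx_forward` and the test signal are mine):

```python
import numpy as np

def dx_forward(f):
    """Forward finite difference: df[i] ~ f[i+1] - f[i].
    Equivalent to convolution with the kernel [-1, 1] (unit spacing);
    linear and shift-invariant, but very sensitive to noise."""
    return f[1:] - f[:-1]

# On a noise-free ramp the estimate recovers the slope exactly.
ramp = np.arange(0.0, 10.0, 2.0)   # [0, 2, 4, 6, 8]: slope 2 per sample
d = dx_forward(ramp)
```

On clean data this is exact for linear signals; the trouble, shown in the following slides, starts once noise is added.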
Finite differences
Noise
Simplest noise model: independent stationary additive Gaussian noise
– the noise value at each pixel is given by an independent draw from the same normal probability distribution
Issues:
– this model allows noise values greater than the maximum camera output, or less than zero
– for small standard deviations this isn't too much of a problem; it's a fairly good model
– independence may not be justified (e.g. damage to the lens)
– the noise may not be stationary (e.g. thermal gradients in the CCD)
sigma = 1
sigma = 16
Finite differences and noise
Finite difference filters respond strongly to noise:
– obvious reason: image noise results in pixels that look very different from their neighbours
– generally, the larger the noise, the stronger the response
What is to be done?
– intuitively, most pixels in images look quite a lot like their neighbours
– this is true even at an edge: along the edge they're similar, across the edge they're not
– this suggests that smoothing the image should help, by forcing pixels that differ from their neighbours (= noise pixels?) to look more like their neighbours
Finite differences responding to noise
Increasing noise → (this is zero-mean additive Gaussian noise)
The response of a linear filter to noise
Consider only stationary independent additive Gaussian noise with zero mean (non-zero mean is easily dealt with).
Mean:
– the output is a weighted sum of inputs
– so we want the mean of a weighted sum of zero-mean normal random variables
– which must be zero
Variance:
– recall that the variance of a sum of random variables is the sum of their variances, and the variance of a constant times a random variable is the constant squared times the variance
– then if σ² is the noise variance and the kernel is K, the variance of the response is σ² Σ_{u,v} K²(u,v)
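The variance formula above is easy to check empirically. This sketch compares the predicted response variance σ²·ΣK² for a 3×3 averaging kernel against the sample variance over many independent noise patches (patch count and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
K = np.ones((3, 3)) / 9.0                # 3x3 averaging kernel

# Predicted variance of the filter response: sigma^2 * sum(K^2).
predicted = sigma**2 * np.sum(K**2)      # = 4 * (9 / 81) = 4/9

# Empirical check: filter many independent zero-mean noise patches.
n = 20000
patches = rng.normal(0.0, sigma, (n, 3, 3))
responses = np.einsum('nij,ij->n', patches, K)   # one dot product each
empirical = responses.var()
```

Averaging shrinks the noise variance by a factor ΣK² = 1/9 here, which is exactly why smoothing suppresses noise.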
Filter responses are correlated
– over scales similar to the scale of the filter
Filtered noise is sometimes useful:
– it looks like some natural textures, and can be used to simulate fire, etc.
Smoothing reduces noise
Generally we expect pixels to "be like" their neighbours:
– surfaces turn slowly
– there are relatively few reflectance changes
We generally expect noise processes to be independent from pixel to pixel. This implies that smoothing suppresses noise, for appropriate noise models.
Scale:
– the parameter σ in the symmetric Gaussian
– as this parameter goes up, more pixels are involved in the average
– the image gets more blurred
– and noise is more effectively suppressed
The effects of smoothing
Each row shows smoothing with Gaussians of different width; each column shows a different realisation of an image of Gaussian noise.
Some other useful filtering techniques
– Median filter
– Anisotropic diffusion
Median filters: principle
A non-linear filter method:
1. rank-order the neighbourhood intensities
2. take the middle value
No new grey levels emerge.
Median filters: odd-man-out
An advantage of this type of filter is its "odd-man-out" effect, e.g.
1, 1, 1, 7, 1, 1, 1, 1 → ?, 1, 1, 1, 1, 1, 1, ?
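The odd-man-out effect can be sketched in 1D (the function name and the edge-replication padding are mine; the spike signal is the slide's example):

```python
import numpy as np

def median_filter_1d(x, width=5):
    """Rank-order each neighbourhood and take the middle value.
    Non-linear: no new grey levels can emerge."""
    pad = width // 2
    xp = np.pad(x, pad, mode="edge")     # replicate edge values
    return np.array([np.median(xp[i:i + width]) for i in range(len(x))])

# The 'odd-man-out' effect: an isolated spike is discarded completely,
# where a linear filter would always respond to it.
signal = np.array([1, 1, 1, 7, 1, 1, 1, 1], dtype=float)
filtered = median_filter_1d(signal, width=5)
```

Every width-5 window contains at most one 7, so the median is 1 everywhere: the spike vanishes without being smeared out.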
Median filters: example
The filters have width 5:
Median filters: analysis
– the median completely discards the spike; a linear filter always responds to all aspects
– the median filter preserves discontinuities; a linear filter produces rounding-off effects
Don't become too optimistic, though.
Median filter: images
A 3 × 3 median filter sharpens edges, but destroys edge cusps and protrusions.
Median filters: Gauss revisited
Comparison with a Gaussian: e.g. the upper lip is smoother, the eye better preserved.
Example of median filtering
10 iterations of a 3 × 3 median: patchy effect; important details are lost (e.g. the ear-ring).
Gradients and edges
Points of sharp change in an image are interesting:
– change in reflectance
– change in object
– change in illumination
– noise
They are sometimes called edge points.
General strategy:
– determine the image gradient
– mark points where the gradient magnitude is particularly large with respect to neighbours (ideally, curves of such points)
There are three major issues:
1) the gradient magnitude at different scales is different; which should we choose?
2) the gradient magnitude is large along a thick trail; how do we identify the significant points?
3) how do we link the relevant points up into curves?
Smoothing and differentiation
Issue: noise
– smooth before differentiation
– two convolutions: one to smooth, then one to differentiate?
– actually, no: we can use a single derivative-of-Gaussian filter, because differentiation is convolution, and convolution is associative
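Associativity is what makes the single-filter trick work: d ∗ (g ∗ f) = (d ∗ g) ∗ f, so the derivative kernel and the Gaussian can be folded into one derivative-of-Gaussian kernel in advance. A 1D sketch (test signal, σ, and kernel sizes are my choices; the comparison ignores boundary samples where the 'same'-mode cropping differs):

```python
import numpy as np

sigma = 2.0
x = np.arange(-8, 9)
g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()   # Gaussian, unit sum
d = np.array([1.0, 0.0, -1.0]) / 2.0               # central difference

f = np.sin(np.linspace(0, 4 * np.pi, 200))         # a smooth test signal

# Two-step: smooth, then differentiate (two convolutions).
two_step = np.convolve(np.convolve(f, g, mode="same"), d, mode="same")

# One-step: precompute the derivative-of-Gaussian kernel, convolve once.
dog = np.convolve(d, g)
one_step = np.convolve(f, dog, mode="same")
```

Away from the boundaries the two results agree to machine precision, which is the associativity claim in action.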
The scale of the smoothing filter affects derivative estimates, and also the semantics of the edges recovered.
(Examples at 1 pixel, 3 pixels, and 7 pixels.)
We wish to mark points along the curve where the gradient magnitude is biggest. We can do this by looking for a maximum along a slice normal to the curve (non-maximum suppression). These points should form a curve. There are then two algorithmic issues: at which point is the maximum, and where is the next one?
Non-maximum suppression
At q, we have a maximum if the value is larger than those at both p and r. Interpolate to get these values.
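The p/q/r test above can be sketched as follows. This illustrative version uses nearest-neighbour lookup along the gradient direction instead of the interpolation the slide calls for, to keep the code short (the function name `nms` and the toy ridge are mine):

```python
import numpy as np

def nms(mag, gx, gy):
    """Non-maximum suppression sketch: keep a pixel only if its gradient
    magnitude exceeds both neighbours along the gradient direction.
    Nearest-neighbour lookup stands in for proper interpolation."""
    out = np.zeros_like(mag)
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            g = np.hypot(gx[i, j], gy[i, j])
            if g == 0:
                continue
            di = int(np.round(gy[i, j] / g))   # step along gradient dir
            dj = int(np.round(gx[i, j] / g))
            p = mag[i - di, j - dj]            # neighbour before q
            r = mag[i + di, j + dj]            # neighbour after q
            if mag[i, j] > p and mag[i, j] > r:
                out[i, j] = mag[i, j]
    return out

# A 3-pixel-thick vertical ridge thins to its single-pixel crest.
mag = np.zeros((5, 5)); mag[:, 2] = 2.0; mag[:, 1] = 1.0; mag[:, 3] = 1.0
gx = np.ones((5, 5)); gy = np.zeros((5, 5))   # gradient points across ridge
out = nms(mag, gx, gy)
```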
Predicting the next edge point
Assume the marked point is an edge point. Then we construct the tangent to the edge curve (which is normal to the gradient at that point) and use this to predict the next points (here, either r or s).
Remaining issues
– Check that the maximum value of the gradient is sufficiently large
– Drop-outs? Use hysteresis: a high threshold to start edge curves and a low threshold to continue them.
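The hysteresis idea can be sketched as a flood fill: seed at pixels above the high threshold, then grow through 8-neighbours above the low threshold. An illustrative sketch (function name, breadth-first traversal, and the toy magnitude row are my choices):

```python
import numpy as np
from collections import deque

def hysteresis(mag, high, low):
    """Start edge curves at pixels >= high, then continue them through
    neighbours >= low, so weak sections of a real edge survive."""
    strong = mag >= high
    keep = strong.copy()
    q = deque(zip(*np.nonzero(strong)))        # seed queue: strong pixels
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (0 <= ni < mag.shape[0] and 0 <= nj < mag.shape[1]
                        and not keep[ni, nj] and mag[ni, nj] >= low):
                    keep[ni, nj] = True
                    q.append((ni, nj))
    return keep

# One strong pixel rescues the weak pixels connected to it;
# the isolated sub-low pixel is dropped.
mag = np.array([[0., 0., 0., 0., 0.],
                [5., 3., 3., 1., 0.],
                [0., 0., 0., 0., 0.]])
edges = hysteresis(mag, high=4, low=2)
```

A single low threshold would also keep disconnected noise responses; the two-threshold scheme keeps only weak pixels reachable from a strong one.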
Notice
– Something nasty is happening at corners
– Scale affects contrast
– Edges aren't bounding contours
Edge detection results:
– fine scale, high threshold
– coarse scale, high threshold
– coarse scale, low threshold
Filters are templates
Applying a filter at some point can be seen as taking a dot product between the image and some vector. Filtering the image is a set of dot products.
Insight:
– filters look like the effects they are intended to find
– filters find effects they look like
Normalized correlation
Think of filters as a dot product, and now measure the angle: the normalised-correlation output is the filter output divided by the root sum of squares of the values over which the filter lies.
Tricks:
– ensure that the filter has a zero response to a constant region (helps reduce response to irrelevant background)
– subtract the image average when computing the normalizing constant (i.e. subtract the image mean in the neighbourhood)
– taking the absolute value deals with contrast reversal
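The "measure the angle" view, with the mean-subtraction trick, can be sketched for one patch/template pair (the function name and the toy template are mine):

```python
import numpy as np

def normalized_correlation(patch, template):
    """Treat filtering as a dot product and measure the angle between
    patch and template: subtract each mean, divide by the product of
    the norms. Output lies in [-1, 1] and is insensitive to gain and
    offset changes."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t)
    return float(np.sum(p * t) / denom) if denom > 0 else 0.0

t = np.array([[0., 1., 0.], [1., 2., 1.], [0., 1., 0.]])
same = normalized_correlation(3 * t + 5, t)   # gain 3, offset 5: still 1
flipped = normalized_correlation(-t, t)       # contrast reversal: -1
```

The contrast-reversed patch scores −1, which is why taking the absolute value of the response handles reversal.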
Positive responses
Zero-mean image, −1:1 scale / zero-mean image, −max:max scale
Figure from "Computer Vision for Interactive Computer Graphics", W. Freeman et al., IEEE Computer Graphics and Applications, 1998. Copyright 1998 IEEE.
The Laplacian of Gaussian
Another way to detect an extremal first derivative is to look for a zero second derivative. The appropriate 2D analogue is the rotation-invariant Laplacian, ∇²f = ∂²f/∂x² + ∂²f/∂y².
– It's a bad idea to apply a Laplacian without smoothing: smooth with a Gaussian, then apply the Laplacian
– this is the same as filtering with a Laplacian of Gaussian (LoG) filter
Now mark the zero points where there is a sufficiently large derivative, and enough contrast.
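Because convolution commutes with differentiation, the LoG kernel can be obtained by applying the Laplacian to the Gaussian analytically and sampling the result. A sketch (the truncation radius, the zero-mean correction, and the unscaled normalisation are my choices):

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sampled Laplacian of Gaussian, proportional to
    (r^2 - 2 sigma^2) / sigma^4 * exp(-r^2 / (2 sigma^2)).
    Zero crossings of the filtered image mark candidate edges."""
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    K = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return K - K.mean()    # force exact zero response to constant regions

K = log_kernel(sigma=2.0, radius=8)   # negative centre, positive surround
```

The kernel has the familiar "Mexican hat" shape: a negative centre surrounded by a positive annulus, and zero total weight so flat regions give zero response.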
LoG zero crossings
(sigma = 2 and sigma = 4; contrast = 1 and contrast = 4)
We still have unfortunate behaviour at corners.
Orientation representations
The gradient magnitude is affected by illumination changes, but its direction isn't. We can describe image patches by the swing of the gradient orientation.
Important types:
– constant window: small gradient magnitudes
– edge window: a few large gradient magnitudes in one direction
– flow window: many large gradient magnitudes in one direction
– corner window: large gradient magnitudes that swing
Representing windows
Types, by the eigenvalues of the window's gradient covariance:
– constant: two small eigenvalues
– edge: one medium, one small
– flow: one large, one small
– corner: two large eigenvalues
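The eigenvalue test above can be sketched by summing the outer products of the gradients over a window (the structure tensor) and classifying from the eigenvalues. The thresholds and toy gradient fields here are illustrative, not from the lecture:

```python
import numpy as np

def window_type(gx, gy, eps=1e-3):
    """Classify a window from the eigenvalues of the summed gradient
    covariance [[sum gx^2, sum gx*gy], [sum gx*gy, sum gy^2]]:
    two small -> constant, one dominant -> edge/flow, two large -> corner.
    Thresholds are illustrative."""
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    l1, l2 = np.sort(np.linalg.eigvalsh(M))[::-1]   # l1 >= l2
    if l1 < eps:
        return "constant"
    if l2 < eps * l1:
        return "edge/flow"
    return "corner"

n = 8
flat = window_type(np.zeros((n, n)), np.zeros((n, n)))   # no gradients
edge = window_type(np.ones((n, n)), np.zeros((n, n)))    # one direction
corner = window_type(np.eye(n), np.eye(n)[::-1])         # orientations swing
```

Distinguishing edge from flow needs more than the eigenvalues of this 2×2 summary (e.g. how many pixels carry large gradients), so the sketch lumps them together.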
Next class: Pyramids and Texture (F&P Chapter 9)