1
CS 556 – Computer Vision Image Basics & Review
2
What is an Image? Image: a representation, resemblance, or likeness. An image is a signal: a function carrying information. Thus, an image is a function f from ℝ² to ℝ: f(x, y) gives the amount of some value at position (x, y). In practice, an image is only defined over a finite rectangular domain and with a finite range: f : [a, b] × [c, d] → [v_min, v_max].
3
Images as Functions
4
An image is a signal: a function carrying information. Functions have domains and ranges. Domain: (t), (x, y), (x, y, t), (x, y, z), (x, y, z, t). Range: sound (air pressure), gray level (light intensity), color (RGB, HSL), LANDSAT (7 bands).
5
Images as Functions A color image is a vector-valued function created by pasting three functions together. Spatial image: a function of two or three spatial dimensions: f(x, y) for images (grayscale, color, multi-spectral), f(x, y, z) for medical scans or image volumes (CT, MRI). Spatio-temporal image: 2/3-D space plus 1-D time: f(x, y, t) for videos, movies, animations.
6
What do the Range Values Mean? May be visible light: intensity (gray level), color (RGB). May be quantities we cannot sense: radio waves (e.g., Doppler radar), magnetic resonance, range images, ultrasound, X-rays (e.g., CT). What intensity does the value “213” represent?
7
Digital Images: Domains & Ranges The real world is analog: continuous domain and range. Computers operate on digital (discrete) data. Converting from continuous to discrete: for domains, the selection of discrete points is called sampling; for ranges, the selection of discrete values is called quantization.
8
Digital Image Formation To create a digital image: sample the 2-D space on a regular grid, then quantize each sample (round to the nearest integer). If the samples are Δ apart, we can write this as: f[i, j] = Quantize{ f(iΔ, jΔ) }. The image can now be represented as a matrix of integer values, indexed by i and j (the slide shows an example matrix of pixel values).
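A minimal sketch of this sampling-and-quantization step, assuming NumPy and a hypothetical continuous image f_cont (not from the slides):

```python
import numpy as np

def f_cont(x, y):
    # Hypothetical continuous image: a smooth intensity pattern with values in [0, 1].
    return 0.5 * (1.0 + np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y))

delta = 0.125                                          # sample spacing (the Delta above)
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")

samples = f_cont(i * delta, j * delta)                 # sampling: evaluate f on a regular grid
f_digital = np.round(samples * 255).astype(np.uint8)   # quantization: round to integer gray levels
print(f_digital)                                       # an 8 x 8 matrix of integer pixel values
```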
9
Resolution Ability to discern detail – both domain & range Not simply the number of samples/pixels Determined by the averaging or spreading of information when sampled or reconstructed
10
Apertures Point measurements are impossible. Have to make measurements using a (weighted) average over some aperture: a time window, a spatial area, etc. Size determines resolution: smaller aperture, better resolution; larger aperture, worse resolution.
11
Apertures Lenses allow a physically larger aperture to act as an effectively smaller one. (Diagram: sensor, lens, effective aperture, physical aperture.)
12
Image Transformations An image processing operation typically defines a new image g in terms of an existing image f. We can transform either the domain or the range of f. Range transformation (a.k.a. level operations): g(x, y) = t(f(x, y)). What kinds of operations can this perform?
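As an illustration (a sketch, not from the slides), a simple contrast stretch is one such range/level operation, since each output pixel depends only on the value of the corresponding input pixel:

```python
import numpy as np

def contrast_stretch(f, new_min=0, new_max=255):
    """Range transformation g(x, y) = t(f(x, y)): linearly rescale gray levels.

    Assumes f is not constant (f.max() > f.min()).
    """
    f = f.astype(np.float64)
    g = (f - f.min()) / (f.max() - f.min()) * (new_max - new_min) + new_min
    return g.astype(np.uint8)
```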
13
Image Transformations Some operations preserve the range but change the domain of f (a.k.a. geometric operations): g(x, y) = f(t_x(x, y), t_y(x, y)). What kinds of operations can this perform? Many image transforms operate on both the domain and the range.
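A sketch of a geometric (domain) operation, using translation as the example; the helper name and the nearest-neighbour lookup are illustrative assumptions:

```python
import numpy as np

def translate(f, dx, dy, fill=0):
    """Domain transformation g(x, y) = f(x - dx, y - dy) with nearest-neighbour lookup."""
    g = np.full_like(f, fill)
    h, w = f.shape
    for y in range(h):
        for x in range(w):
            xs, ys = x - dx, y - dy          # t_x(x, y) and t_y(x, y) for a pure shift
            if 0 <= xs < w and 0 <= ys < h:
                g[y, x] = f[ys, xs]
    return g
```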
14
Linear Transforms A general and very useful class of transforms is linear transforms. Properties of linear transforms: multiplying the input f(x) by a constant multiplies the output by the same constant: t(a f(x)) = a t(f(x)); adding two inputs causes the corresponding outputs to add: t(f(x) + h(x)) = t(f(x)) + t(h(x)). Linearity: the transform t is linear iff t(a f(x) + b h(x)) = a t(f(x)) + b t(h(x)).
15
Linear Transforms A linear transform of a discrete signal/image f can be defined by a matrix M using matrix multiplication: g[i] = Σ_j M[i, j] f[j]. Note that matrix and vector indices start at 0 instead of 1. Does M(a f + b h) = a M f + b M h?
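A quick numerical check (a sketch assuming NumPy) that a matrix-defined transform satisfies M(a f + b h) = a M f + b M h:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))     # an arbitrary transformation matrix
f = rng.standard_normal(5)
h = rng.standard_normal(5)
a, b = 2.0, -3.0

lhs = M @ (a * f + b * h)
rhs = a * (M @ f) + b * (M @ h)
print(np.allclose(lhs, rhs))        # True: transforms defined by matrix multiplication are linear
```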
16
Linear Transforms: Examples Let’s start with a discrete 1-D image (a “signal”): f[x] = [4, 4, 0, 0, 2, 4, 6, 6].
17
Linear Transforms: Examples Identity transform: M = I, so M f = I f = f; the output equals the input, [4, 4, 0, 0, 2, 4, 6, 6].
18
Linear Transforms: Examples Scale: M = a I, so M f = a I f = a f; each sample is multiplied by a, giving [4a, 4a, 0, 0, 2a, 4a, 6a, 6a].
19
Linear Transforms: Examples Shift (translate): M is an identity matrix with its 1s moved off the diagonal, so the output is a shifted copy of the input (in the example, g = [0, 0, 4, 4, 0, 0, 2, 4], i.e., f shifted by two samples).
20
Linear Transforms: Examples Derivative (finite difference): each row of M differences adjacent samples, g[x] = f[x + 1] − f[x], giving g = [0, −4, 0, 2, 2, 2, 0, 0].
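A sketch of that finite-difference transform as an explicit matrix, assuming a forward difference g[x] = f[x + 1] − f[x] with the last output set to 0 (the slides do not spell out the boundary handling):

```python
import numpy as np

n = 8
M = np.zeros((n, n))
for x in range(n - 1):
    M[x, x], M[x, x + 1] = -1.0, 1.0     # row x computes f[x + 1] - f[x]

f = np.array([4., 4., 0., 0., 2., 4., 6., 6.])
print(M @ f)                             # [ 0. -4.  0.  2.  2.  2.  0.  0.]
```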
21
Linear Transforms: Examples The transformation matrix doesn’t have to be square: a rectangular M produces an output with a different number of samples than the input (in the example, the 8-sample f maps to the 4-sample g = [4, 0, 3, 6]).
22
Fourier Transform One important linear transform is the Fourier transform. Basic idea: any function can be written as the sum of (complex-valued) sinusoids of different frequencies. Euler’s equation: e^{i 2π s x} = cos(2π s x) + i sin(2π s x). Note: i is the imaginary unit. To get the weights (the amount of each frequency), project f onto each sinusoid: F[s] = Σ_x f[x] e^{−i 2π s x / n}.
23
Fourier Transform In matrix form: F = W f, where W[s, x] = e^{−i 2π s x / n}. The frequency increases with the row number.
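A sketch of building that Fourier matrix directly from Euler’s formula, using the common e^{−i 2π s x / n} convention (sign and normalization conventions vary):

```python
import numpy as np

n = 8
s, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
W = np.exp(-2j * np.pi * s * x / n)        # row s is a complex sinusoid of frequency s

f = np.array([4., 4., 0., 0., 2., 4., 6., 6.])
weights = W @ f                            # amount of each frequency present in f
print(np.allclose(weights, np.fft.fft(f))) # True: matches NumPy's (unnormalized) DFT
```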
24
Linear Shift-Invariant Transform A special class of linear transforms is the shift-invariant transforms. Shift invariance: an operation is invariant to translation. Implication: shifting the input produces the same output with an equal shift: if g(x) = t(f(x)), then t(f(x + x₀)) = g(x + x₀).
25
Filters Filter: linear, shift-invariant transform. Often applied to operations that are not technically filters (e.g., the median “filter”). Transformation matrix M: a shifted copy of some pattern applied to each row; the pattern is (usually) centered on (or near) the diagonal. The pattern is called a filter, kernel, or mask and is represented by a vector h, e.g., h[x] = [a b c], with each row of M holding a shifted copy of [a b c].
26
Filters Filter operations can be written (for a kernel of size 2k + 1) as: g[i] = Σ_{j = −k}^{k} h[j] f[i + j]. This assumes negative kernel indices... an actual implementation may need to use h[j + k] instead of h[j]. Can think of it as a dot (or inner) product of h with a portion of f. Since 2k + 1 is often much less than n, this computation is more efficient than a full matrix multiplication (it ignores summing terms that are multiplied by 0).
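A minimal sketch of that dot-product view of filtering, shifting kernel indices by +k so array indices stay non-negative and zero-padding the signal at the ends (a padding choice not specified on the slide):

```python
import numpy as np

def filter_1d(f, h):
    """Cross-correlate signal f with an odd-length kernel h of size 2k + 1."""
    k = len(h) // 2
    f_pad = np.pad(f, k)                   # zero padding keeps the output the same length as f
    g = np.empty(len(f))
    for i in range(len(f)):
        # dot product of h with the window of f centred at sample i
        g[i] = np.dot(h, f_pad[i:i + 2 * k + 1])
    return g
```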
27
Cross-Correlation & Convolution Filtering operations come in two (very similar) types. Cross-correlation (already seen): g[i] = Σ_{j = −k}^{k} h[j] f[i + j]. Convolution: g[i] = Σ_{j = −k}^{k} h[j] f[i − j]. Convolution is cross-correlation where either the kernel or the signal is flipped first. How do the results differ for cross-correlation and convolution if the kernel is symmetric? Anti-symmetric?
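A short check of that relationship with NumPy’s built-ins: convolution equals cross-correlation with the flipped kernel, so the two coincide exactly when the kernel is symmetric:

```python
import numpy as np

f = np.array([4., 4., 0., 0., 2., 4., 6., 6.])
h = np.array([1., 2., 3.])                                       # an asymmetric kernel

conv = np.convolve(f, h, mode="same")                            # convolution
xcorr = np.correlate(f, h, mode="same")                          # cross-correlation
print(np.allclose(conv, np.correlate(f, h[::-1], mode="same")))  # True: flip the kernel
print(np.allclose(conv, xcorr))                                  # False: h is not symmetric
```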
28
2-D Linear Transforms A 2-D discrete image (in matrix form) can be turned into a 1-D vector by concatenating its rows into one long vector, so a 2-D linear transform can still be written as a matrix M times that vector. However, it is usually easier to think about it in terms of the computation for an individual value of g[u, v].
29
2-D Transforms: Fourier Transform The 2-D discrete Fourier transform is given by F[u, v] = Σ_x Σ_y f[x, y] e^{−i 2π (u x / m + v y / n)}, where the weight values w_{u,v} have been replaced with complex sinusoids of the corresponding horizontal and vertical frequencies.
30
Fourier Transform: Examples
31
2-D Filtering A 2-D image f[x, y] can be filtered by convolving (or cross-correlating) it with a 2-D kernel h[x, y] to produce an output image g[u, v]: g[u, v] = Σ_{i = −k}^{k} Σ_{j = −k}^{k} h[i, j] f[u + i, v + j]. As with the 1-D case, an actual implementation may need to use h[i + k, j + k] instead of h[i, j] to adjust for negative indices. Filtering is useful for many reasons such as noise reduction and edge detection.
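A minimal 2-D version of the same idea (a sketch assuming a square odd-sized kernel and zero padding at the image borders):

```python
import numpy as np

def filter_2d(f, h):
    """Cross-correlate image f with an odd-sized square kernel h of size (2k + 1) x (2k + 1)."""
    k = h.shape[0] // 2
    f_pad = np.pad(f, k)                   # zero padding keeps the output the same size as f
    g = np.empty(f.shape)
    for u in range(f.shape[0]):
        for v in range(f.shape[1]):
            g[u, v] = np.sum(h * f_pad[u:u + 2 * k + 1, v:v + 2 * k + 1])
    return g
```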
32
Noise Unavoidable/undesirable fluctuation from the “correct” value: the nemesis of signal/image processing and computer vision. Usually random: modeled as a statistical distribution with its mean (μ) at the “correct” value; a measured sample varies from μ according to the distribution’s spread (σ). Signal-to-Noise Ratio (SNR) = μ/σ: measures how “noise free” the acquired signal is. “Signal” can refer to an absolute or relative value.
33
Noise Filtering is useful for noise reduction... Common types of noise: Salt and pepper: random occurrences of black and white pixels Impulse: random occurrences of white pixels Gaussian: intensity variations drawn from a normal distribution What kind of filter (i.e., kernel) reduces noise? Why?
34
Noise Reduction: Mean Filter What does a 3 × 3 mean (i.e., averaging) kernel look like? What does it do to the salt and pepper noise? What does it do to the edges of the white box? (Example image f[x, y]: a bright box on a black background plus one isolated noise pixel, with the output g[x, y] to be filled in.)
35
Noise Reduction: Mean Filter (Worked example: the same image f[x, y] filtered with the 3 × 3 averaging kernel h[x, y] of all 1s; the output g[x, y] spreads the isolated noise pixel and blurs the box edges.)
36
Noise Reduction: Mean Filter (Continuation of the worked example as the kernel window slides across f[x, y].)
37
Mean Filter: Effect (3 × 3, 5 × 5, and 7 × 7 mean filters applied to images corrupted by Gaussian noise and by salt and pepper noise.)
38
Noise Reduction: Gaussian Filter A Gaussian kernel gives less weight to pixels further from the center of the window. This kernel is an approximation of a Gaussian function: h[x, y] = [[1, 2, 1], [2, 4, 2], [1, 2, 1]].
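A sketch of that kernel in code; dividing by the sum of the weights (16 here) keeps the overall brightness unchanged, and the kernel is the outer product of the 1-D binomial kernel [1, 2, 1], which is why it approximates a Gaussian and is separable:

```python
import numpy as np

h = np.array([[1., 2., 1.],
              [2., 4., 2.],
              [1., 2., 1.]])
h /= h.sum()                                                  # normalize so the weights sum to 1
print(np.allclose(h, np.outer([1, 2, 1], [1, 2, 1]) / 16.0))  # True: separable binomial kernel
```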
39
Mean vs. Gaussian Filtering
40
Non-Linear Operations Strictly speaking, non-linear operators are not filters, though they are often mistakenly called “filters”. They can be useful, though. Examples: order statistics (e.g., the median filter), iterative algorithms (e.g., CLEAN), anisotropic diffusion, non-uniform convolution-like operations.
41
Median “Filter” Instead of a local neighborhood weighted average, compute the median of the neighborhood Advantages: Removes noise like low-pass filtering does Value is from actual image values Removes outliers – doesn’t average (blur) them into result (“despeckling”) Edge preserving Disadvantages: Not linear Not shift invariant Slower to compute
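A sketch of a median “filter” with a (2k + 1) × (2k + 1) neighbourhood; the edge padding and the helper name are illustrative choices, not from the slides:

```python
import numpy as np

def median_filter(f, k=1):
    """Replace each pixel with the median of its (2k + 1) x (2k + 1) neighbourhood."""
    f_pad = np.pad(f, k, mode="edge")      # repeat border pixels instead of padding with zeros
    g = np.empty(f.shape)
    for u in range(f.shape[0]):
        for v in range(f.shape[1]):
            g[u, v] = np.median(f_pad[u:u + 2 * k + 1, v:v + 2 * k + 1])
    return g
```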
42
Comparison: Salt & Pepper Noise (3 × 3 and 7 × 7 Gaussian, mean, and median filters applied to an image corrupted by salt and pepper noise.)
43
Comparison: Gaussian Noise (3 × 3 and 7 × 7 Gaussian, mean, and median filters applied to an image corrupted by Gaussian noise.)
44
Edge Detection: Differentiation Let F and H be the Fourier transforms of f and h. Convolution theorem: convolution in the spatial (image) domain is equivalent to multiplication in the frequency (Fourier) domain. Symmetrically: convolution in the frequency domain is equivalent to multiplication in the spatial domain. Why is this useful?
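A quick numerical check of the convolution theorem (a sketch assuming NumPy), using circular convolution, which is the form the discrete Fourier transform diagonalizes exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
f = rng.standard_normal(n)
h = rng.standard_normal(n)

# Circular convolution computed directly in the spatial domain...
g_spatial = np.array([sum(f[m] * h[(k - m) % n] for m in range(n)) for k in range(n)])
# ...equals pointwise multiplication of the transforms in the frequency domain.
g_fourier = np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)).real
print(np.allclose(g_spatial, g_fourier))   # True
```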