1
Image Processing IB Paper 8 – Part A Ognjen Arandjelović http://mi.eng.cam.ac.uk/~oa214/
2
Lecture Roadmap Lecture 1: Geometric image transformations. Lecture 2: Colour and brightness enhancement. Lecture 3: Denoising and image filtering.
3
– Image Denoising and Filtering –
4
Image Noise Sources Image noise may be produced by several sources: Quantization Photonic Thermal Electric
5
Denoising To perform denoising effectively, we need to consider the following issues: a signal (uncorrupted image) model, typically piece-wise constant or linear; and a noise model (from the physics of image formation), e.g. additive or multiplicative, Gaussian, white, salt and pepper…
6
Salt and Pepper Noise
7
Gaussian Noise
8
Modelling Noise Most often noise is additive: g(x, y) = f(x, y) + n(x, y), where g is the observed pixel luminance, f the true luminance, and n the noise process.
9
Additive Gaussian Noise – Example A clear original image was corrupted by additive white Gaussian noise. Figures: original, uncorrupted image; additive Gaussian noise.
10
Additive Gaussian Noise – Example A clear original image was corrupted by additive white Gaussian noise: Additive Gaussian noise
11
Additive Gaussian Noise – Example Taking a slice through the image can help us visualize the behaviour of noise better:
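For concreteness, a minimal Python sketch of how such a corruption and slice visualization might be reproduced, assuming numpy and matplotlib are available; the file name and noise level are placeholders, not values from the lecture:

```python
import numpy as np
import matplotlib.pyplot as plt

# Load a greyscale image as a float array ("original.png" is a placeholder path).
original = plt.imread("original.png").astype(float)
if original.ndim == 3:
    original = original.mean(axis=2)  # collapse RGB to a single luminance channel

# Corrupt with zero-mean additive white Gaussian noise (sigma is arbitrary here).
sigma = 0.1 * original.max()
noisy = original + np.random.normal(0.0, sigma, size=original.shape)

# Visualize one row ("slice") of the clean and noisy images.
row = original.shape[0] // 2
plt.plot(original[row], label="original")
plt.plot(noisy[row], label="noisy", alpha=0.6)
plt.xlabel("column"); plt.ylabel("luminance"); plt.legend()
plt.show()
```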
12
Temporal Average for Video Denoising A video feed of a static scene can be easily denoised by temporal averaging, under the assumption of zero-mean additive noise: the pixel luminance estimate is the average over the N frames, estimate(x, y) = (1/N) Σ_{i=1..N} g_i(x, y), where g_i(x, y) is the pixel luminance in frame i. The average noise energy is reduced by a factor of N.
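A minimal sketch of temporal averaging, assuming the N frames of the static scene are available as a numpy array; the function name is mine, not the lecture's:

```python
import numpy as np

def temporal_average(frames):
    """Average N aligned frames of a static scene.

    frames: array of shape (N, H, W) corrupted by zero-mean additive noise.
    The estimate's average noise energy is reduced by a factor of N.
    """
    frames = np.asarray(frames, dtype=float)
    return frames.mean(axis=0)

# Hypothetical usage with 100 noisy frames of the same scene:
# estimate = temporal_average(noisy_frames)
```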
13
Temporal Averaging – Example Consider our noisy CCTV image from the previous lecture and the result of brightness enhancement: original image; brightness-enhanced image.
14
Temporal Averaging – Example The effect of temporal averaging over 100 frames is dramatic: the clarity of image detail is much improved. Note, however, that moving objects cause blur.
15
Spatial Averaging Although attractive, a static video feed is usually not available. However, a similar technique can be used by noting that images are mostly smoothly varying. Figure: an original smoothly varying signal and the same signal corrupted with zero-mean Gaussian noise.
16
Simple Spatial Averaging Thus, we can attempt to denoise the signal by simple spatial averaging. Figure: the result of averaging over each 7-pixel (± 3) neighbourhood.
17
Simple Spatial Averaging – Example Using our synthetically corrupted image: additive Gaussian noise; spatially averaged using a 5 × 5 neighbourhood.
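A sketch of simple spatial averaging over a 5 × 5 neighbourhood using scipy's uniform (box) filter, with an RMS-difference helper since the next slide compares images that way; the variable names reuse the earlier sketch and are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def box_average(image, size=5):
    """Replace each pixel by the mean of its size x size neighbourhood."""
    return uniform_filter(np.asarray(image, dtype=float), size=size)

def rms_difference(a, b):
    """Root-mean-square difference between two equally sized images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.sqrt(np.mean((a - b) ** 2))

# Hypothetical usage, reusing `original` and `noisy` from the earlier sketch:
# denoised = box_average(noisy, size=5)
# print(rms_difference(original, noisy), rms_difference(original, denoised))
```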
18
Simple Spatial Averaging – Example Consider the difference between the uncorrupted image and the corrupted and denoised images: before averaging, RMS difference = 29; after averaging, RMS difference = 12.
19
Simple Spatial Averaging – Analysis The result of averaging looks good, but closer inspection reveals some loss of detail. Figures: difference image; magnified patch.
20
Simple Spatial Averaging – Analysis To analyze the filtering effects formally, rewrite the original averaging expression as a convolution integral of the signal with a rectangular pulse.
21
1D Convolution A quick convolution recap: (f * h)(x) = ∫ f(u) h(x - u) du, i.e. flip h(x) and slide it over f(x).
22
Discrete 1D Convolution In dealing with discrete signals, the integral becomes a sum: y(n) = Σ_k f(k) h(n - k); again, flip the kernel and slide it over the signal. Figure: a worked example sliding the flipped kernel (1 2 2 1 0 0) of h = (0 0 1 2 2 1) over the samples (… 234 233 228 240 241 …).
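A small sketch of the discrete 1D convolution described above, using numpy; the signal and kernel values echo the fragment visible on the slide but are otherwise illustrative:

```python
import numpy as np

f = np.array([234, 233, 228, 240, 241], dtype=float)  # a few signal samples
h = np.array([0, 0, 1, 2, 2, 1], dtype=float)         # the filter kernel

# np.convolve flips h and slides it over f, summing products at each shift.
y = np.convolve(f, h, mode="full")
print(y)
```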
23
2D Convolution The concept of linear filtering as convolution with a filter (or kernel) extends to 2D, and the integral becomes (f * h)(x, y) = ∫∫ f(u, v) h(x - u, y - v) du dv. We shall deal only with separable filters, h(x, y) = h1(x) h2(y), for which this is equivalent to two 1D convolutions.
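A sketch of separable 2D filtering as two 1D convolutions, one along each image axis, assuming scipy is available; the 5-tap box kernel in the usage note is illustrative:

```python
import numpy as np
from scipy.ndimage import convolve1d

def separable_filter(image, kernel_1d):
    """Apply a separable 2D kernel h(x, y) = k(x) k(y) as two 1D convolutions."""
    image = np.asarray(image, dtype=float)
    tmp = convolve1d(image, kernel_1d, axis=0, mode="nearest")  # filter along columns
    return convolve1d(tmp, kernel_1d, axis=1, mode="nearest")   # then along rows

# Example: a normalized 5-tap box kernel applied separably is equivalent to
# convolving with a full 5 x 5 averaging kernel, but cheaper to compute.
# smoothed = separable_filter(noisy, np.ones(5) / 5.0)
```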
24
Simple Spatial Averaging – Analysis By considering the effects of convolution in the frequency domain, we can now see why there was a loss of detail: the Fourier transform of the rectangular pulse function is the sinc function, so high frequencies are damped.
25
White Noise Model This insight allows us to design the denoising filter in a principled way, by considering the SNR over different frequencies. Figure: signal and noise frequency spectra (energy versus frequency).
26
White Noise Model Considering the SNR over different frequencies, the filter should pass the band where the signal dominates the noise and reject the band where the noise dominates.
27
The Ideal LPF Again As when we dealt with reconstructing a signal from a set of samples, we can low-pass filter by convolving with the sinc function in the spatial domain. The key limitation is that the sinc function has wide spatial support, so in practice we often use filters that offer a better trade-off between spatial support and bandwidth.
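To make the wide-support limitation concrete, here is a sketch of a truncated ideal (sinc) low-pass kernel; the cutoff and length are arbitrary illustrative values, not ones from the lecture:

```python
import numpy as np

def truncated_sinc_kernel(cutoff, half_width):
    """Samples of the ideal LPF impulse response 2*fc*sinc(2*fc*x), truncated.

    cutoff: normalized cutoff frequency in cycles per sample (0 < fc < 0.5).
    Even a long kernel has not decayed to zero at its ends, which is why
    practical filters trade bandwidth for compact spatial support.
    """
    x = np.arange(-half_width, half_width + 1)
    h = 2.0 * cutoff * np.sinc(2.0 * cutoff * x)
    return h / h.sum()  # normalize to unit gain at zero frequency

# h = truncated_sinc_kernel(cutoff=0.1, half_width=25)
```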
28
Gaussian Low-Pass Filter The Gaussian LPF is one of the most commonly used LPFs. It possesses the attractive property of minimal space-bandwidth product. Figures: 1D Gaussian; 2D Gaussian as a surface; 2D Gaussian as an image.
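A sketch of Gaussian low-pass filtering with scipy, plus an explicit 1D Gaussian kernel for reference; the sigma value in the usage note is illustrative, not the one used in the lecture:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_kernel_1d(sigma, half_width):
    """Explicit, normalized 1D Gaussian kernel; the 2D kernel is separable."""
    x = np.arange(-half_width, half_width + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

# In practice scipy applies the separable Gaussian directly:
# smoothed = gaussian_filter(noisy.astype(float), sigma=2.0)
```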
29
Gaussian LPF – Toy Example Using the Gaussian filter on our toy 1D example produces a nearly perfect filtering result: RMS error reduction from 0.1 to 0.02
30
Gaussian LPF – Example Using our synthetically corrupted image: additive Gaussian noise; low-pass filtered using a Gaussian.
31
Low, Band and High-Pass Filters A quick recap of relevant terminology. Figure: gain versus frequency for low-pass, band-pass and high-pass filters.
32
Low, Band and High-Pass Filters A summary of main uses. Low-pass: denoising. High-pass: removal of non-informative low-frequency components. Band-pass: combination of low-pass and high-pass filtering effects.
33
Gaussian High-Pass Filter A high-pass filter can be constructed simply from the Gaussian LPF: h_HP(x) = δ(x) - h_LP(x), since convolution with the delta function leaves the function unchanged.
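A sketch of this construction: since convolving with the delta function returns the image unchanged, subtracting the Gaussian-smoothed image from the original gives the high-pass response. The default sigma is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_highpass(image, sigma=2.0):
    """High-pass filter built from the Gaussian LPF: (delta - G) * f = f - G * f."""
    image = np.asarray(image, dtype=float)
    return image - gaussian_filter(image, sigma=sigma)
```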
34
Gaussian HPF – Toy Example Consider the effects of high-pass filtering our 1D toy example: original signal; high-pass filter output. The result does not depend on the signal mean, and the maximal responses occur around discontinuities.
35
Gaussian HPF – Example Consider the effects of high-pass filtering an image: original image; high-pass filtered image. Information-rich intensity discontinuities are extracted.
36
High Frequency Image Content An example of the importance of high-frequency content: what do we get when the output of a low-pass filter is added to the output of a high-pass filter?
37
High Frequency Image Content And the result of the experiment is…
38
HPFs in Face Recognition High-pass filters are used in face recognition to achieve quasi-illumination invariance: Original image of a localized face High-pass filtered
39
Filter Design – Matched Filters Consider the convolution sum of a discrete signal with a particular filter: when is the filter response maximal? Figure: the flipped kernel (1 2 2 1 0 0) overlapping the samples (… 234 233 228 240 241 …).
40
Filter Design – Matched Filters The summation is the same as a vector dot product: the response is thus maximal when the two vectors are parallel, i.e. when the filter matches the local patch it overlaps.
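A sketch of the matched-filter idea in 1D: take the dot product of the template with every local patch of the signal, so the response peaks where the patch is most nearly parallel to the template. The function name is mine:

```python
import numpy as np

def matched_responses(signal, template):
    """Dot product of the template with every local patch of the signal."""
    signal = np.asarray(signal, dtype=float)
    template = np.asarray(template, dtype=float)
    n = len(template)
    return np.array([np.dot(signal[i:i + n], template)
                     for i in range(len(signal) - n + 1)])

# The maximum response occurs where the local signal patch best matches
# (is most nearly parallel to) the template.
```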
41
Filter Design – Intensity Discontinuities Using the observation that the maximal filter response is exhibited when the filter matches the overlapping signal, we can start designing more complex filters: a kernel with maximal response to intensity edges is (0.5, 0.0, -0.5).
42
Filter Design – Intensity Discontinuities Better yet, perform Gaussian smoothing to suppress noise first: combining the Gaussian kernel with the edge kernel gives a noise-suppressing kernel with a high response to intensity edges.
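A sketch of this noise-suppressing edge filter in 1D: Gaussian smoothing followed by the centred-difference kernel (0.5, 0.0, -0.5), which by associativity of convolution amounts to a single combined kernel. The sigma value is an illustrative assumption:

```python
import numpy as np
from scipy.ndimage import convolve1d, gaussian_filter1d

EDGE_KERNEL = np.array([0.5, 0.0, -0.5])  # maximal response at intensity edges

def smoothed_edge_response(signal, sigma=1.0):
    """Suppress noise with a Gaussian first, then apply the edge kernel."""
    smoothed = gaussian_filter1d(np.asarray(signal, dtype=float), sigma=sigma)
    return convolve1d(smoothed, EDGE_KERNEL, mode="nearest")
```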
43
Unsharp Masking Enhancement The main principle of unsharp masking is to extract high-frequency information and add it onto the original image to enhance edges. Figure: the image is high-pass filtered and added back to itself to form the output; an original edge profile becomes an enhanced one.
44
Unsharp Masking Enhancement Unsharp mask filtering performs noise reduction and edge enhancement in one go by combining a Gaussian LPF with a Laplacian of Gaussian kernel: Gaussian smoothing plus convolution with the negative Laplacian of Gaussian gives the result.
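A sketch of unsharp masking as described on this slide, i.e. Gaussian smoothing plus the negated Laplacian-of-Gaussian response, using scipy's gaussian_filter and gaussian_laplace; the sigma and weighting are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace

def unsharp_mask(image, sigma=2.0, amount=1.0):
    """Gaussian smoothing plus the negative Laplacian-of-Gaussian response.

    The smoothing suppresses high-frequency noise while the -LoG term
    enhances intensity edges.
    """
    image = np.asarray(image, dtype=float)
    return gaussian_filter(image, sigma=sigma) - amount * gaussian_laplace(image, sigma=sigma)
```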
45
Unsharp Masking – Example Consider the following synthetic example: Gaussian smoothed then corrupted with Gaussian noise
46
Unsharp Masking – Example The result after unsharp masking the image that was Gaussian smoothed and then corrupted with Gaussian noise.
47
– That is All for Today –