7. Neighbourhood operations

A single pixel considered in isolation conveys information on the intensity and colour at a single location in an image, but it can tell us nothing about the way in which these properties vary spatially. The point processes described in the preceding chapter, which change a pixel's value independently of all other pixels, cannot be used to investigate or control spatial variations in image intensity or colour.

For this, we need to perform calculations over areas of an image; in other words, a pixel's new value must be computed from its old value and the values of pixels in its vicinity. These neighbourhood operations are invariably more costly than simple point processes, but they allow us to achieve a whole range of interesting and useful effects.

In this chapter, we consider several neighbourhood operations, including convolution, linear filtering, and edge detection. These operations all employ the same basic approach: a pixel's new value is computed as a weighted sum of its old value and the values of the pixels in its neighbourhood. We also assume throughout that these operations are performed on greyscale images.

Convolution

Convolution is the fundamental neighbourhood operation of image processing. In convolution, the calculation performed at a pixel is a weighted sum of grey levels from a neighbourhood surrounding that pixel (including the pixel under consideration). Clearly, if a neighbourhood is centred on a pixel, then it must have odd dimensions, e.g. 3 x 3, 5 x 5, etc.

Grey levels taken from the neighbourhood are weighted by coefficients that come from a matrix or convolution kernel. The kernel's dimensions define the size of the neighbourhood in which calculations take place; dimensions of 3 x 3 are the most common. Figure 7.1 shows a 3 x 3 kernel and the corresponding 3 x 3 neighbourhood of pixels from an image. The kernel is centred on the shaded pixel.

Fig. 7.1. A 3 x 3 convolution kernel and the corresponding image neighbourhood.
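In code, a kernel is naturally represented as a small two-dimensional array whose dimensions fix the size of the neighbourhood. The following is a minimal Python sketch; the coefficient values are chosen purely for illustration and are not those shown in Figure 7.1.

# Illustrative 3 x 3 kernel, stored as a list of rows. The coefficients are
# assumed for this sketch (a common smoothing kernel), not those of Fig. 7.1.
kernel = [[1, 2, 1],
          [2, 4, 2],
          [1, 2, 1]]

size = len(kernel)      # 3: calculations use a 3 x 3 neighbourhood
radius = size // 2      # 1: the neighbourhood extends one pixel in each direction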

The result of convolution will be a new value for this pixel. During convolution, we take each kernel coefficient in turn and multiply it by a value from the neighbourhood of the image lying under the kernel. We apply the kernel to the image in such a way that the value at the top-left corner of the kernel is multiplied by the value at the bottom-right corner of the neighbourhood.

Denoting the kernel by h and the image by f, the entire calculation is

g(x, y) = h(0,0) f(x+1, y+1) + h(1,0) f(x, y+1) + h(2,0) f(x-1, y+1)
        + h(0,1) f(x+1, y)   + h(1,1) f(x, y)   + h(2,1) f(x-1, y)
        + h(0,2) f(x+1, y-1) + h(1,2) f(x, y-1) + h(2,2) f(x-1, y-1)

This summation can be expressed more compactly as

g(x, y) = ∑_{k=-1}^{1} ∑_{j=-1}^{1} h(j+1, k+1) f(x-j, y-k).

For the kernel and neighbourhood illustrated in Figure 7.1, the result of convolution is

g(x, y) = (-1 x 82) + (1 x 88) + (-2 x 65) + (2 x 76) + (-1 x 60) + (1 x 72) = 40.
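The double summation translates directly into a pair of nested loops. Below is a minimal Python sketch of the per-pixel calculation; the function name convolve_pixel and the kernel and image values are assumptions made for illustration (they are not the values of Figure 7.1). Note how the negative j and k offsets implement the flip described earlier, pairing the top-left kernel coefficient with the bottom-right neighbourhood pixel.

def convolve_pixel(f, h, x, y):
    # Computes g(x, y) = sum over k = -1..1, j = -1..1 of h(j+1, k+1) f(x-j, y-k).
    # f is a list of rows of grey levels, so it is indexed f[y][x]; h is a 3 x 3
    # list of rows, so h[k+1][j+1] holds the coefficient written h(j+1, k+1) above.
    total = 0
    for k in range(-1, 2):
        for j in range(-1, 2):
            total += h[k + 1][j + 1] * f[y - k][x - j]
    return total

# Illustrative values (not those of Figure 7.1): a horizontal difference
# kernel and a tiny 3 x 3 image.
h = [[-1, 0, 1],
     [-2, 0, 2],
     [-1, 0, 1]]
f = [[10, 10, 10],
     [10, 50, 90],
     [90, 90, 90]]

print(convolve_pixel(f, h, 1, 1))   # prints -160; negative results are why
                                    # Algorithm 7.1 below clamps its output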

Algorithm 7.1. Convolution of an image using a 3 x 3 convolution kernel.

Create an M x N input image, f
Create a kernel, h, with dimensions 3 x 3
Let g be an output image of dimensions M x N

/* set the border pixels of the output to zero */
for all pixel coordinates, x and y, do
    g(x, y) = 0
end for

for y = 1 to N - 2 do
    for x = 1 to M - 2 do
        sum = h(0,0) f(x+1, y+1) + h(1,0) f(x, y+1) + h(2,0) f(x-1, y+1)
            + h(0,1) f(x+1, y)   + h(1,1) f(x, y)   + h(2,1) f(x-1, y)
            + h(0,2) f(x+1, y-1) + h(1,2) f(x, y-1) + h(2,2) f(x-1, y-1)
        /* when convolution produces a negative result, the output is set to zero;
           when it produces a result exceeding 255, the output is fixed at 255 */
        if sum < 0 then
            g(x, y) = 0
        else if sum > 255 then
            g(x, y) = 255
        else
            g(x, y) = sum
        end if
    end for
end for
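The following is a minimal Python rendering of Algorithm 7.1, assuming the image f is stored as a list of N rows, each containing M grey levels; the function name convolve3x3 is an assumption of this sketch rather than anything prescribed by the text.

def convolve3x3(f, h):
    # f: input image as a list of N rows of M grey levels, indexed f[y][x]
    # h: 3 x 3 kernel as a list of rows (h[k+1][j+1] is the text's h(j+1, k+1))
    N = len(f)          # number of rows (y runs from 0 to N-1)
    M = len(f[0])       # number of columns (x runs from 0 to M-1)
    g = [[0] * M for _ in range(N)]   # border pixels of the output stay zero

    for y in range(1, N - 1):         # y = 1 .. N-2, as in the pseudocode
        for x in range(1, M - 1):     # x = 1 .. M-2
            total = 0
            for k in range(-1, 2):
                for j in range(-1, 2):
                    total += h[k + 1][j + 1] * f[y - k][x - j]
            # Clamp the result to the valid range of grey levels, 0..255.
            g[y][x] = min(255, max(0, total))
    return g

Leaving the one-pixel border of g at zero mirrors the pseudocode above and avoids indexing outside the image; the clamping step plays the role of the if/else test on sum.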