Computer Vision Lecture 5: Binary Image Processing
September 23, 2014

Binary Images
Binary images are grayscale images with only two possible levels of brightness for each pixel: black or white. Binary images require little memory for storage and can be processed very quickly. They are a good representation of an object if we are only interested in the contour of that object, and the object can be separated from the background and from other objects (no occlusion).

Thresholding
We usually create binary images from grayscale images through thresholding. This can be done easily and perfectly if, for example, the brightness of pixels is lower for those of the object than for those of the background. Then we can set a threshold θ such that θ is greater than the brightness value of any object pixel and smaller than the brightness value of any background pixel.

Thresholding
In that case, we can apply the threshold θ to the original image A[i, j] to generate the thresholded image A_θ[i, j]:

A_θ[i, j] = 1 if A[i, j] ≤ θ
          = 0 otherwise

The convention for binary images is that pixels belonging to the object(s) have value 1 and all other pixels have value 0. We usually display 1-pixels in black and 0-pixels in white.
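A minimal NumPy sketch of this rule (the function name, the use of NumPy, and the sample values are illustrative choices, not from the slides):

```python
import numpy as np

def threshold(A, theta):
    """Binary image A_theta: 1 where A[i, j] <= theta, 0 elsewhere.

    Follows the slide's convention that object pixels (darker than the
    threshold) become 1 and background pixels become 0.
    """
    return (A <= theta).astype(np.uint8)

# Example: a bright 3x3 patch with one dark object pixel in the center
A = np.array([[200, 210, 205],
              [198,  40, 202],
              [207, 199, 211]])
print(threshold(A, theta=100))   # only the center pixel becomes 1
```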

Thresholding
If we know that the intensity of all object pixels is in the range between values θ1 and θ2, we can perform the following thresholding operation:

A_θ[i, j] = 1 if θ1 ≤ A[i, j] ≤ θ2
          = 0 otherwise

If the intensities of all object pixels are not in a particular interval, but are still distinct from the background values, we can do the following:

A_Z[i, j] = 1 if A[i, j] ∈ Z
          = 0 otherwise,

where Z is the set of intensities of object pixels.
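Sketches of these two variants under the same assumptions as above (function and parameter names are illustrative; np.isin performs the set-membership test A[i, j] ∈ Z):

```python
import numpy as np

def threshold_band(A, theta1, theta2):
    """1 where theta1 <= A[i, j] <= theta2, 0 otherwise."""
    return ((A >= theta1) & (A <= theta2)).astype(np.uint8)

def threshold_set(A, Z):
    """1 where the intensity A[i, j] is in the set Z of object intensities."""
    return np.isin(A, list(Z)).astype(np.uint8)
```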

Thresholding
Here, the right image is created from the left image by thresholding, assuming that object pixels are darker than background pixels. As you can see, the result is slightly imperfect (dark background pixels).

Thresholding
How to find the optimal threshold?

Thresholding
Intensity histogram

Thresholding
Thresholding result

Some Definitions
For a pixel [i, j] in an image, its 4-neighbors (4-neighborhood) are the pixels [i-1, j], [i+1, j], [i, j-1], and [i, j+1]. Its 8-neighbors (8-neighborhood) additionally include the four diagonal pixels [i-1, j-1], [i-1, j+1], [i+1, j-1], and [i+1, j+1].
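As a small illustration, the two neighborhoods can be written as follows (hypothetical helper functions; image borders are not checked here):

```python
def neighbors_4(i, j):
    """4-neighborhood of pixel [i, j]: up, down, left, right."""
    return [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]

def neighbors_8(i, j):
    """8-neighborhood: the 4-neighbors plus the four diagonal pixels."""
    return neighbors_4(i, j) + [(i - 1, j - 1), (i - 1, j + 1),
                                (i + 1, j - 1), (i + 1, j + 1)]
```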

Some Definitions
A path from the pixel at [i_0, j_0] to the pixel at [i_n, j_n] is a sequence of pixel indices [i_0, j_0], [i_1, j_1], …, [i_n, j_n] such that each pixel in the sequence is a neighbor of the next one, for all k with 0 ≤ k ≤ n - 1. If the neighbor relation uses 4-connection, the path is a 4-path; for 8-connection, the path is an 8-path.

Some Definitions
The set of all 1-pixels in an image is called the foreground and is denoted by S. A pixel p ∈ S is said to be connected to q ∈ S if there is a path from p to q consisting entirely of pixels of S. Connectivity is an equivalence relation, because:
- Pixel p is connected to itself (reflexivity).
- If p is connected to q, then q is connected to p (symmetry).
- If p is connected to q and q is connected to r, then p is connected to r (transitivity).

Some Definitions
A set of pixels in which each pixel is connected to all other pixels is called a connected component. The set of all connected components of -S (the complement of S) that have points on the border of the image is called the background. All other components of -S are called holes.
In the example: 4-connectedness gives 4 objects and 1 hole; 8-connectedness gives 1 object and no hole. To avoid ambiguity, use 4-connectedness for the foreground and 8-connectedness for the background, or vice versa.

Some Definitions
The boundary of S is the set of pixels of S that have 4-neighbors in -S. The boundary is denoted by S'. The interior is the set of pixels of S that are not in its boundary; the interior of S is (S - S'). Region T surrounds region S (or S is inside T) if any 4-path from any point of S to the border of the picture must intersect T.
(Illustration: original image; boundary, interior, surround.)
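A sketch of the boundary and interior definitions for a binary array S (NumPy-based; padding with background so that pixels on the image border count as boundary is an assumption, not stated on the slide):

```python
import numpy as np

def boundary(S):
    """Pixels of S that have at least one 4-neighbor in -S (the set S')."""
    P = np.pad(np.asarray(S), 1, constant_values=0)   # pad with background
    in_S = P[1:-1, 1:-1] == 1
    has_bg_neighbor = ((P[:-2, 1:-1] == 0) | (P[2:, 1:-1] == 0) |
                       (P[1:-1, :-2] == 0) | (P[1:-1, 2:] == 0))
    return (in_S & has_bg_neighbor).astype(np.uint8)

def interior(S):
    """Pixels of S that are not in its boundary: S - S'."""
    return ((np.asarray(S) == 1) & (boundary(S) == 0)).astype(np.uint8)
```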

Component Labeling
Component labeling is one of the most fundamental operations on binary images. It is used to distinguish different objects in an image, for example, bacteria in microscopic images. We find all connected components in an image and assign a unique label to all pixels in the same component.

Component Labeling
A simple algorithm for labeling connected components works like this:
1. Scan the image to find an unlabeled 1-pixel and assign it a new label L.
2. Recursively assign the label L to all of its 1-pixel neighbors.
3. Stop if there are no more unlabeled 1-pixels.
4. Go to step 1.
However, this algorithm is very inefficient. Let us develop a more efficient, non-recursive algorithm.

Component Labeling
1. Scan the image left to right, top to bottom.
2. If the pixel is 1, then:
   - If only one of its upper and left neighbors has a label, then copy the label.
   - If both have the same label, then copy the label.
   - If both have different labels, then copy the upper neighbor's label and enter both labels in the equivalence table as equivalent labels.
   - Otherwise, assign a new label to this pixel and enter this label in the equivalence table.
3. If there are more pixels to consider, then go to step 2.
4. Find the lowest label for each equivalence set in the equivalence table.
5. Scan the picture. Replace each label by the lowest label in its equivalence set.
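A sketch of this two-pass algorithm in Python (4-connectivity; keeping the equivalence table as a small union-find structure is an implementation choice, not part of the slide):

```python
import numpy as np

def label_components(B):
    """Two-pass connected-component labeling (4-connectivity).

    The first pass assigns provisional labels from the upper/left neighbors
    and records label equivalences; the second pass replaces each label by
    the lowest label in its equivalence set. Returns an int array of labels
    (0 = background).
    """
    B = np.asarray(B)
    labels = np.zeros(B.shape, dtype=int)
    parent = {}                      # union-find over labels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)   # keep the lowest label as representative

    next_label = 1
    rows, cols = B.shape
    for i in range(rows):
        for j in range(cols):
            if B[i, j] != 1:
                continue
            up = labels[i - 1, j] if i > 0 else 0
            left = labels[i, j - 1] if j > 0 else 0
            if up == 0 and left == 0:            # no labeled neighbor: new label
                labels[i, j] = next_label
                parent[next_label] = next_label
                next_label += 1
            elif up and left and up != left:     # conflict: copy upper label, record equivalence
                labels[i, j] = up
                union(up, left)
            else:                                # exactly one label, or both equal
                labels[i, j] = up or left
    # Second pass: replace each label by the lowest label in its equivalence set
    for i in range(rows):
        for j in range(cols):
            if labels[i, j]:
                labels[i, j] = find(labels[i, j])
    return labels
```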

Size Filter
We can use component labeling to remove noise in binary images. For example, when we want to perform optical character recognition (OCR), it often happens that there are small groups of 1-pixels outside the actual characters. Since these are usually very small, isolated blobs, we can remove them by applying a size filter: we label all components, compute their size, and set all pixels of every component smaller than a threshold θ to 0.
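A sketch of the size filter, built on the label_components() sketch above (theta is the size threshold; the names are illustrative):

```python
import numpy as np

def size_filter(B, theta):
    """Set all pixels of components with fewer than theta pixels to 0."""
    labels = label_components(B)           # from the sketch above
    counts = np.bincount(labels.ravel())   # component sizes by label
    keep = counts >= theta
    keep[0] = False                        # label 0 is the background
    return keep[labels].astype(np.uint8)
```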

Size Filter
Here, for θ = 10, the size filter perfectly removes all noise in the input image.

Size Filter
However, if our threshold is too high, "accidents" may happen.

Size Filter
In the previous case we had only "positive noise," that is, there were some 1-pixels in places that should have contained 0-pixels. Often, we also have "negative noise," which means that we have 0-pixels in places that should contain 1-pixels. To remove negative noise, we could define a "hole size filter" that removes all holes that are smaller than a certain threshold. A common, efficient method of removing both kinds of noise is to apply sequences of expanding and shrinking.
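One possible way to sketch such a hole size filter is to reuse label_components() on the complement of the image. Note that this simplification labels the complement with 4-connectivity, whereas the earlier slide suggests switching connectivity for the background; that choice, and the names, are assumptions:

```python
import numpy as np

def hole_size_filter(B, theta):
    """Fill holes (components of -S that do not touch the image border)
    that contain fewer than theta pixels."""
    B = np.asarray(B)
    comp = label_components(1 - B)                  # components of the complement
    border = set(comp[0, :]) | set(comp[-1, :]) | set(comp[:, 0]) | set(comp[:, -1])
    counts = np.bincount(comp.ravel())
    out = B.copy()
    for lab in range(1, counts.size):
        if lab not in border and counts[lab] < theta:
            out[comp == lab] = 1                    # fill the small hole
    return out
```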

Expanding and Shrinking
As we have just seen, using a size filter is one method of preprocessing images for subsequent character recognition. Another common way of achieving this is called expanding and shrinking.
Expanding operation: for all pixels in the image, change a pixel from 0 to 1 if any neighbors of the pixel are 1.
Shrinking operation: for all pixels in the image, change a pixel from 1 to 0 if any neighbors of the pixel are 0.
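A sketch of the two operations, assuming the 8-neighborhood (the slide does not fix the neighborhood) and padding so that the area outside the image counts as background:

```python
import numpy as np

def expand(B):
    """Change a pixel from 0 to 1 if any of its 8-neighbors is 1 (1-pixels stay 1)."""
    B = np.asarray(B)
    P = np.pad(B, 1, constant_values=0)
    out = np.zeros_like(B)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= P[1 + di:1 + di + B.shape[0], 1 + dj:1 + dj + B.shape[1]]
    return out

def shrink(B):
    """Change a pixel from 1 to 0 if any of its 8-neighbors is 0 (0-pixels stay 0)."""
    B = np.asarray(B)
    P = np.pad(B, 1, constant_values=1)   # pad with 1 so the image border does not erode the object
    out = np.ones_like(B)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= P[1 + di:1 + di + B.shape[0], 1 + dj:1 + dj + B.shape[1]]
    return out
```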

Expanding and Shrinking
Here, the original image (left) is expanded (center) or shrunken (right). Shrinking can actually be considered as expanding the background.

Expanding and Shrinking
Expanding followed by shrinking can be used for filling undesirable holes. Shrinking followed by expanding can be used for removing isolated noise pixels. If the resolution of the image is sufficiently high and the noise level is low, an expanding – shrinking – shrinking – expanding sequence may be able to do both tasks. Of course, we always have to perform the same total number of expanding and shrinking operations.
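As a usage note, the expanding – shrinking – shrinking – expanding sequence could be written with the sketches above as (the function name is illustrative):

```python
def clean(B):
    """Expanding then shrinking fills small holes; shrinking then expanding
    removes small isolated noise. The total number of expand and shrink
    steps is balanced, as required on the slide."""
    B = shrink(expand(B))   # expanding followed by shrinking
    B = expand(shrink(B))   # shrinking followed by expanding
    return B
```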

Expanding and Shrinking
(Figures: original image; result of expanding followed by shrinking; result of shrinking followed by expanding.)

Expanding and Shrinking

Compactness
For a two-dimensional continuous geometric figure, its compactness is measured by the quotient P²/A, where P is the figure's perimeter and A is its area. For example, for a square with side length s we have P = 4s and A = s², so its compactness is 16. For a circle of radius r we have P = 2πr and A = πr², so its compactness is 4π ≈ 12.57. No figure is more compact than a circle, so 4π is the minimum value for compactness. Notice: the more compact a figure is, the lower its compactness value.

Compactness
For the computation of compactness, the perimeter of a connected component can be defined in different ways:
- The sum of the lengths of the "cracks" separating pixels of S from pixels of -S. A crack is a line that separates a pair of pixels p and q such that p ∈ S and q ∈ -S.
- The number of steps taken by a boundary-following algorithm.
- The number of boundary pixels of S.
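A sketch that computes compactness with the first definition (crack length), using only NumPy; the 10x10 square check reproduces the value 16 from the previous slide:

```python
import numpy as np

def compactness(S):
    """P^2 / A, with the perimeter P measured as the total length of the
    cracks between pixels of S and pixels of -S."""
    S = np.asarray(S)
    A = S.sum()                                   # area: number of 1-pixels
    P = np.pad(S, 1, constant_values=0)
    # every horizontally or vertically adjacent pixel pair with different
    # values contributes one unit-length crack
    cracks = (np.abs(np.diff(P, axis=0)).sum() +
              np.abs(np.diff(P, axis=1)).sum())
    return cracks ** 2 / A

print(compactness(np.ones((10, 10), dtype=int)))  # 16.0, matching the square example
```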

Geometric Properties
Let us say that we want to write a program that can recognize different types of tools in binary images. Then we have the following problem: the same tool could be shown in different sizes, positions, and orientations.

Geometric Properties

Geometric Properties
We could teach our program what the objects look like at different sizes and orientations, and let the program search all possible positions in the input. However, that would be a very inefficient and inflexible approach. Instead, it is much simpler and more efficient to standardize the input before performing object recognition. We can scale the input object to a given size, center it in the image, and rotate it towards a specific orientation.

Computing Object Size
The size A of an object in a binary image B is simply defined as the number of black pixels ("1-pixels") in the image:

A = Σ_i Σ_j B[i, j]

A is also called the zeroth-order moment of the object. In order to standardize the size of the object, we expand or shrink the object so that its size matches a predefined value.

Computing Object Position
We compute the position of an object as the center of gravity of the black pixels, that is, the mean row index and the mean column index of all 1-pixels:

( Σ_i Σ_j i·B[i, j] / A ,  Σ_i Σ_j j·B[i, j] / A )

The sums Σ_i Σ_j i·B[i, j] and Σ_i Σ_j j·B[i, j] are also called the first-order moments of the object. In order to standardize the position of the object, we shift its position so that it is in the center of the image.
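A sketch of both standardization measurements (size as the zeroth-order moment, position as the center of gravity); the function names are illustrative:

```python
import numpy as np

def object_size(B):
    """Zeroth-order moment: the number of 1-pixels in the binary image B."""
    return int(np.asarray(B).sum())

def object_position(B):
    """Center of gravity of the 1-pixels: mean row and column index,
    i.e. the first-order moments divided by the size A."""
    B = np.asarray(B)
    A = B.sum()
    i_idx, j_idx = np.indices(B.shape)
    return (i_idx * B).sum() / A, (j_idx * B).sum() / A
```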