IMAGE REPRESENTATION

IMAGE REPRESENTATION means the conversion of an analog image into a digital image. This conversion of analog (3D) to digital (2D) takes place in 3 stages: 1) Sampling 2) Quantizing 3) Coding

SAMPLING means measuring the value of an image at a finite number of points. Sampling is done in two directions, i.e., x and y, and we get a resolution of m*n pixels. The three conditions to be considered while sampling an analog image are: 1) The sampling rate has to be very high. 2) While sampling, the aspect ratio has to be maintained. 3) The Nyquist rate has to be maintained. At the end of the sampling stage we obtain a discrete image, given by I0(k, j), where k and j represent the x and y axes respectively.
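
To make the sampling stage concrete, here is a minimal Python/numpy sketch that samples a continuous intensity function on an m*n grid. The `analog` function, grid size, and field-of-view arguments are hypothetical stand-ins, since real sampling happens inside the camera hardware.

```python
import numpy as np

def sample_image(analog, m, n, width, height):
    # sample the "analog" image (a function of continuous x, y)
    # at m points along x and n points along y
    xs = np.linspace(0.0, width, m)
    ys = np.linspace(0.0, height, n)
    # discrete image I0(k, j): one measurement per grid point
    return np.array([[analog(x, y) for y in ys] for x in xs])

# usage: a synthetic scene sampled at a 64 x 48 resolution
I0 = sample_image(lambda x, y: 0.5 + 0.5 * np.sin(x) * np.cos(y),
                  64, 48, 10.0, 7.5)
```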

The discrete image obtained is then quantized. Quantization discretizes the image in brightness and is also called amplitude digitization. The quantized image is represented with a precision of b binary digits. This results in 2^b intensity values, or shades of gray/brightness.
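
A sketch of the quantization step, assuming the sampled intensities are normalized to [0, 1] (an assumption; the slides do not fix a range). With b bits we get 2^b gray levels:

```python
import numpy as np

def quantize(I0, b):
    # map intensities in [0, 1] to integer levels 0 .. 2**b - 1
    levels = 2 ** b
    return np.clip((I0 * levels).astype(int), 0, levels - 1)

I0 = np.random.rand(64, 48)   # stand-in sampled image
Iq = quantize(I0, 8)          # 8 bits -> 256 shades of gray
```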

In the CODING STAGE the digital image I(k, j), consisting of a number of pixels, is further coded or packed into patterns of bits consisting of 0s and 1s. The image, now of a definite size (e.g., 1 KB, 1 MB), can be used for storage or transmission. This stage gives the actual digital image.
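
The slides do not prescribe a particular coding scheme; numpy's packbits is used here just as one concrete illustration of packing a binary image into a bit stream:

```python
import numpy as np

binary = np.random.randint(0, 2, (64, 48), dtype=np.uint8)  # stand-in 0/1 image
packed = np.packbits(binary)   # pack 8 pixels into each byte
print(packed.nbytes, "bytes instead of", binary.size)       # 384 vs 3072
```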

Template Matching

Template Matching in Images
General idea of template matching:
– create a small "template" image of the feature we want to detect
– center the template image over pixel (i, j) in a target image
– calculate the pixel differences between the template image and the pixels centered at (i, j): if the distance is small, the template is similar to the window centered at (i, j); if the distance is large, the template is dissimilar
– repeat for all pixels
– the minimum distance is the best "match"
– this is known as "template matching"

Source Image Representation: S(x, y)

Template Representation: T(x_t, y_t)

Procedure for Template Matching [diagram: the template is slid across the target image]

Example. Minimum distance: Diff(x_s, y_s, x_t, y_t) = | I_s(x_s, y_s) − I_t(x_t, y_t) |
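
Putting the procedure and the distance together, here is a brute-force sketch of template matching using the per-pixel Diff above, summed over the template window (the sum-of-absolute-differences variant):

```python
import numpy as np

def match_template(S, T):
    """Slide template T over source S; return the top-left corner of the
    window with the smallest total absolute difference (the best match)."""
    m, n = T.shape
    best, best_pos = np.inf, None
    for i in range(S.shape[0] - m + 1):
        for j in range(S.shape[1] - n + 1):
            # distance between the template and the window at (i, j)
            d = np.abs(S[i:i+m, j:j+n].astype(int) - T.astype(int)).sum()
            if d < best:
                best, best_pos = d, (i, j)
    return best_pos, best
```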

Polyhedral Objects

Edge Detection. Reflected light intensity is constant over the surface of an object (if the surfaces are smooth and opaque and the lighting is uniform). At edges there is a sudden change in light intensity, where the intensity jumps from one grayscale level to another. This jump discontinuity in light intensity at edges is used to locate the edges of polyhedral objects.

Gradient operators are used to determine edges in polyhedral objects. The gradient is a vector having both magnitude and direction. Over the surfaces of objects, the gradient of light intensity is essentially constant; at the edges of objects, it is ideally infinite (very large in practice).

Selection of edge threshold

The magnitude of the gradient vector is used to determine the edges of an object. The angle of the gradient vector is used to determine which side of an edge corresponds to which object in an image. These two quantities can be computed with the Roberts cross operator.
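
A sketch of the Roberts cross operator on a grayscale image. The 2x2 kernels below are the standard Roberts kernels; the slides do not show them explicitly:

```python
import numpy as np

def roberts_cross(I):
    I = I.astype(float)
    gx = I[:-1, :-1] - I[1:, 1:]   # kernel [[1, 0], [0, -1]]
    gy = I[:-1, 1:] - I[1:, :-1]   # kernel [[0, 1], [-1, 0]]
    magnitude = np.hypot(gx, gy)   # large at edges, near zero on surfaces
    angle = np.arctan2(gy, gx)     # tells which side of an edge is which
    return magnitude, angle
```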

Corner Point Detection. Obtain an analog image of the 3D object. Digitize it and give it to the edge detection algorithm (EDA). Obtain a binary image L(k, j) as output. Scan the image with 8 standard corner point templates using normalized cross-correlation (NCC). If sigma(x, y) for some template i is one, then the pixel is a corner pixel.
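
The slides do not spell out sigma(x, y) or the 8 corner templates, but the normalized cross-correlation itself looks like this (a sketch; it returns 1 for a perfect match and is undefined for constant windows):

```python
import numpy as np

def ncc(window, template):
    w = window - window.mean()
    t = template - template.mean()
    return (w * t).sum() / np.sqrt((w * w).sum() * (t * t).sum())

# a pixel is a corner pixel if ncc(window, templates[i]) == 1 for some i
```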

Shape Analysis

Types of Descriptors:
– Line Descriptors
– Area Descriptors

Line Descriptor: Chain Coding Technique. Chain coding gives a relative representation and is invariant to translation. The eight direction numbers around a pixel P are laid out as:
3 2 1
4 P 0
5 6 7

Algorithm:
1. Search for the rightmost foreground pixel.
2. Search for the nearest neighbor in the anticlockwise direction.
3. Write the direction number as the chain code.
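
A minimal Python sketch of this chain-coding algorithm, assuming a single one-pixel-wide closed boundary (thicker or branching blobs need a full boundary tracer such as Moore following):

```python
import numpy as np

# direction offsets (row, col) for chain codes 0..7, matching the grid
# above: 0 = right, then counterclockwise (rows grow downward)
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
           (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(img):
    rows, cols = np.nonzero(img)
    start = max(zip(rows, cols), key=lambda rc: rc[1])  # rightmost pixel
    code, cur, prev = [], start, None
    while True:
        for d in range(8):                 # scan neighbors counterclockwise
            r, c = cur[0] + OFFSETS[d][0], cur[1] + OFFSETS[d][1]
            if (0 <= r < img.shape[0] and 0 <= c < img.shape[1]
                    and img[r, c] and (r, c) != prev):
                code.append(d)             # record the direction number
                prev, cur = cur, (r, c)    # step along the boundary
                break
        else:
            break                          # isolated pixel: no neighbor
        if cur == start:
            break                          # boundary closed
    return code
```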

Area Descriptor: a descriptor based on analysis of the points enclosed by the boundary. A moment is a sum of products of integer powers of the row and column numbers of the foreground pixels. The raw moment is variant to translation, rotation, and scaling (hence the central and normalized central moments below).

Formulas:
Moment: m_pq = Σ_k Σ_j k^p j^q I(k, j)
Central moment: μ_pq = Σ_k Σ_j (k − k̄)^p (j − j̄)^q I(k, j), where (k̄, j̄) = (m_10/m_00, m_01/m_00) is the centroid
Normalized central moment: η_pq = μ_pq / μ_00^γ, where γ = (p + q)/2 + 1
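
The same three moments in numpy, following the formulas above (a sketch assuming a 0/1 binary image indexed by row k and column j):

```python
import numpy as np

def moment(I, p, q):
    k, j = np.indices(I.shape)
    return np.sum((k ** p) * (j ** q) * I)          # m_pq

def central_moment(I, p, q):
    kbar = moment(I, 1, 0) / moment(I, 0, 0)        # centroid row
    jbar = moment(I, 0, 1) / moment(I, 0, 0)        # centroid column
    k, j = np.indices(I.shape)
    return np.sum(((k - kbar) ** p) * ((j - jbar) ** q) * I)   # mu_pq

def normalized_central_moment(I, p, q):
    gamma = (p + q) / 2 + 1
    return central_moment(I, p, q) / central_moment(I, 0, 0) ** gamma  # eta_pq
```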

Advantages of the Area Descriptor over the Line Descriptor:
– A large number of points (the whole region, not just the boundary) are considered.
– It is therefore robust to, i.e., less sensitive to, changes in the boundary.

Segmentation

Segmentation is the process by which items in an image are separated from the background. The set of all connected pixels with a particular gray-level attribute is identified as a region and is assigned a label.

Segmentation
Segmentation algorithms are generally based on one of two basic properties:
– Discontinuities in intensity
– Similarities in intensity
Methods to achieve segmentation:
– Thresholding
– Region labeling

THRESHOLDING

Thresholding is the simplest way of segmentation. It converts a gray-level image into a binary black-and-white image. Thresholding works well if the object of interest has a uniform gray level and rests on a background of a different, but also uniform, gray level.

THRESHOLDING Process:
1. Plot the image histogram.
2. Select a threshold value.
3. All pixels at or above the threshold gray level are assigned to the object.
4. All pixels with gray level below the threshold fall outside the object.

THRESHOLDING
B(r, c) = 1 if I(r, c) ≥ T, and B(r, c) = 0 otherwise
where r: row, c: column, I: grayscale intensity, B: binary intensity, T: intensity threshold
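
In numpy the thresholding rule is a one-liner (I is a stand-in grayscale array here; the threshold T would come from the histogram):

```python
import numpy as np

I = np.random.randint(0, 256, (48, 64))   # stand-in grayscale image
T = 128                                   # threshold chosen from the histogram
B = (I >= T).astype(np.uint8)             # B(r, c) = 1 at or above T, else 0
```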


REGION LABELLING

Producing a labeled image from a segmented binary image is done using region growing. The binary image is scanned for an unlabeled foreground pixel. Each time one is found, it serves as a seed for growing a new foreground region.

Algorithm for region labeling
1. Initialize k = 1, j = 1
2. If I(k, j) = 1, set I(k, j) = 255
3. Set k = k + 1. If k ≤ m go to step 2
4. Set k = 1, j = j + 1. If j ≤ n go to step 2
5. Initialize k = 1, j = 1, i = 0
6. If I(k, j) = 255: (a) set i = i + 1 (b) grow region i from seed (k, j) using the region-growing algorithm
7. Set k = k + 1. If k ≤ m go to step 6
8. Set k = 1, j = j + 1. If j ≤ n go to step 6

Algorithm for region growing
1. Set I(k, j) = i, push (k, j), push (0, 0)
2. If j < n and I(k, j+1) = 255: (a) set I(k, j+1) = i (b) push (k, j+1)
3. If k > 1 and I(k−1, j) = 255: (a) set I(k−1, j) = i (b) push (k−1, j)
4. If j > 1 and I(k, j−1) = 255: (a) set I(k, j−1) = i (b) push (k, j−1)
5. If k < m and I(k+1, j) = 255: (a) set I(k+1, j) = i (b) push (k+1, j)
6. Pop (k, j). If (k, j) ≠ (0, 0) go to step 2
7. Pop (k, j), return
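
A runnable sketch of the two algorithms, restructured as a standard stack-based flood fill (0-based indexing instead of the slides' 1-based, and fewer than 255 regions are assumed so labels never collide with the 255 marker):

```python
import numpy as np

def grow_region(I, k, j, label):
    """Flood-fill the 4-connected patch of 255-marked pixels around
    seed (k, j) with the given label (region-growing algorithm above)."""
    m, n = I.shape
    I[k, j] = label
    stack = [(k, j)]
    while stack:
        k, j = stack.pop()
        for dk, dj in ((0, 1), (-1, 0), (0, -1), (1, 0)):  # 4 neighbors
            r, c = k + dk, j + dj
            if 0 <= r < m and 0 <= c < n and I[r, c] == 255:
                I[r, c] = label          # claim the pixel ...
                stack.append((r, c))     # ... and grow from it later

def label_regions(B):
    """Region-labeling algorithm above: mark foreground as 255, then
    grow a new region from every still-unlabeled seed."""
    I = np.where(B == 1, 255, 0)
    count = 0
    for k in range(I.shape[0]):
        for j in range(I.shape[1]):
            if I[k, j] == 255:
                count += 1
                grow_region(I, k, j, count)
    return I, count
```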

SHRINK AND SWELL OPERATORS, BY PREETI RUPANI AND PRIYADARSHANI SAFAYA

ITERATIVE PROCESSING. Iterative processing of images is the removal of stray noise from images, i.e., smoothing of images. It makes use of iterative operations such as the shrink and swell operators.

SHRINK OPERATOR

Let I(k, j) be an (m x n) binary image. p(k, j) is a pixel function at spatial coordinates (k, j), which has 8 neighbors. It is given by:
p(k, j) = [ Σ (u = −1 to 1) Σ (v = −1 to 1) I(k+u, j+v) ] − I(k, j), where 1 < k < m, 1 < j < n

SHRINK OPERATOR
Shrink(i): I(k, j) = I(k, j) AND 1(i − 1 − [8 − p(k, j)]), where 0 ≤ i ≤ 8
Here [8 − p(k, j)] is the number of background pixels (zeroes) surrounding (k, j), p(k, j) is the pixel function, and 1(·) is the unit step function.

KEY POINTS
– The i-th shrink operator is monotonic in nature.
– The shrink operator converges in finitely many steps.
– It smooths out rough edges and removes noise.
– It eliminates small regions from the background.

SWELL OPERATOR

Swell(i): I(k, j) = I(k, j) OR 1[ p(k, j) − i ], where 0 ≤ i ≤ 8. Here p(k, j) is the pixel function. The i-th swell operator turns pixel (k, j) into a 1 if it has at least i neighbors with value 1; otherwise it leaves the pixel unchanged.
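
Both operators in numpy, built from the pixel function p(k, j) defined earlier (a sketch for 0/1 images; border pixels are left untouched, matching the 1 < k < m, 1 < j < n restriction):

```python
import numpy as np

def pixel_function(I):
    """p(k, j): number of the 8 neighbors of (k, j) that are 1,
    computed for interior pixels (borders are left at 0)."""
    m, n = I.shape
    p = np.zeros((m, n), dtype=int)
    for u in (-1, 0, 1):
        for v in (-1, 0, 1):
            if (u, v) != (0, 0):
                p[1:-1, 1:-1] += I[1+u:m-1+u, 1+v:n-1+v]
    return p

def shrink(I, i):
    """shrink(i): clear an interior foreground pixel that has
    >= i background neighbors (unit step 1(i - 1 - [8 - p]))."""
    out = I.copy()
    keep = (i - 1 - (8 - pixel_function(I)) >= 0).astype(I.dtype)
    out[1:-1, 1:-1] &= keep[1:-1, 1:-1]
    return out

def swell(I, i):
    """swell(i): set an interior pixel to 1 if it has
    >= i foreground neighbors (unit step 1(p - i))."""
    out = I.copy()
    grow = (pixel_function(I) - i >= 0).astype(I.dtype)
    out[1:-1, 1:-1] |= grow[1:-1, 1:-1]
    return out
```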

KEY POINTS
– The swell operator is an iterative operation for smoothing images.
– The i-th swell operator is monotonic in nature.
– The swell operator converges in finitely many steps.
– It fills small inlets and removes holes from a foreground object area.
– It is the dual of the shrink operator.

STEPS IN IMAGE ANALYSIS
1. Apply a threshold to a gray-scale image.
2. A binary image segmented into background and foreground areas is obtained.
3. The iterative shrink(i) operator is applied for some i > 4.
4. The shrink operator has converged when the total foreground area remains the same after the last application.

STEPS CONTINUED…
5. Similarly to the shrink operator, the swell operator is applied until convergence occurs.
6. The smoothed, distinct foreground regions are labeled.
7. Moments are calculated for each region R.
8. Using the moments, the area, centroid, and principal angle are calculated.
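
Step 8 as numpy code: area, centroid, and principal angle from the region's moments. The principal-angle formula theta = 1/2 * atan2(2*mu11, mu20 − mu02) is the standard one; the slides do not give it explicitly:

```python
import numpy as np

def area_centroid_angle(R):
    """R is a 0/1 mask of one labeled region."""
    k, j = np.indices(R.shape)
    m00 = R.sum()                                            # area = pixel count
    kbar, jbar = (k * R).sum() / m00, (j * R).sum() / m00    # centroid
    mu11 = ((k - kbar) * (j - jbar) * R).sum()
    mu20 = ((k - kbar) ** 2 * R).sum()
    mu02 = ((j - jbar) ** 2 * R).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)          # principal angle
    return m00, (kbar, jbar), theta
```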

Euler's Number. The Euler number of an image is defined as the number of parts minus the number of holes: Euler number = #Objects − #Holes.

Examples

Parts and Holes. A part is a connected foreground region. A hole is an isolated background region enclosed by a part.
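
With parts and holes defined, the Euler number can be computed by labeling foreground components and counting the enclosed background components. A sketch using scipy.ndimage.label (default 4-connectivity; the usual mixed-connectivity conventions are glossed over here):

```python
import numpy as np
from scipy import ndimage

def euler_number(img):
    _, n_parts = ndimage.label(img)         # connected foreground regions
    bg, n_bg = ndimage.label(img == 0)      # connected background regions
    # background regions touching the image border are not holes
    border = set(bg[0, :]) | set(bg[-1, :]) | set(bg[:, 0]) | set(bg[:, -1])
    n_holes = n_bg - len(border - {0})
    return n_parts - n_holes
```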

Properties. The Euler number follows the additive set property: E(A ∪ B) = E(A) + E(B) − E(A ∩ B).

Is the Euler Number a Good Shape Descriptor? On its own, the Euler number is not a good shape descriptor; it is better to characterize the connected components by describing their edges.

Applications: vast applications in the areas of machine vision, automatic border detection, and image understanding. A fast, easy, and reliable method of calculating Euler numbers can increase the performance of the system.

Perspective Transformation
Perspective projection has two types:
– Direct perspective transformation
– Inverse perspective transformation

Perspective Projection. Projection is the term used when you are changing the dimension of something. Perspective refers to the way an object's size is affected by distance: the farther away an object moves, the smaller it gets; that is the effect of perspective.

Transformation. With those two definitions at hand, you can probably tell that "perspective projection transformation" means a transformation that does both at the same time, i.e.: "Given the coordinates of an object with respect to the camera frame, the transformation technique used to find the coordinates of its image with respect to the camera frame is called the perspective transformation."
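
A sketch of the direct perspective transformation under the standard pinhole model, which the slides assume implicitly; f is the focal length and z the depth of the point in the camera frame:

```python
def perspective(x, y, z, f):
    """Project camera-frame point (x, y, z) to image coordinates.
    Image size scales with f/z: farther objects (larger z) look smaller."""
    return f * x / z, f * y / z
```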

Inverse Perspective Transformation. "Given the coordinates of an image with respect to the camera frame, the transformation technique used to find the coordinates of the object with respect to the camera frame is called the inverse perspective transformation." The inverse perspective transformation is more useful than the perspective transformation, because the robot controller uses the inverse transformation to find the coordinates of the object itself w.r.t. the camera frame.
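
And the inverse, which needs one extra piece of information, since a single image point corresponds to a whole ray of object points; the depth z is assumed known here (e.g., from range sensing):

```python
def inverse_perspective(xi, yi, z, f):
    """Recover the camera-frame point from image coordinates (xi, yi),
    given the depth z; without z the projection is not invertible."""
    return xi * z / f, yi * z / f, z
```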