COMP322/S2000 Recognition: Object Descriptor

COMP322/S2000/L201 Recognition: Object Descriptor
Example: a binary image in which the object is indicated by 1's.
Run-length encoding of the image: (0,11,3,5,5,4,5,5,5,4,4,6,2,13)
Question: How do we describe the object?
Boundary chain code of the object, starting pixel (2,1): (0,0,7,6,7,6,5,5,4,3,2,2,3,2,1)
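A run-length code like the one above can be produced by scanning the image in a fixed order and recording the lengths of alternating runs of 0's and 1's. Below is a minimal Python/NumPy sketch, assuming a row-by-row scan and a code that starts with the length of the leading run of 0's; the slide's exact convention (scan order, whether 0-runs or 1-runs come first) may differ, and chain-code conventions vary similarly.

```python
import numpy as np

def run_length_encode(img):
    """Alternating run lengths of 0's and 1's over a row-by-row scan.

    The first entry is the length of the leading run of 0's (0 if the
    scan starts on a 1). This is one common convention; the slide's
    exact ordering and scan direction may differ.
    """
    flat = np.asarray(img).astype(int).ravel()       # row-major scan
    changes = np.flatnonzero(np.diff(flat)) + 1      # positions where the value changes
    bounds = np.concatenate(([0], changes, [flat.size]))
    runs = np.diff(bounds).tolist()                  # run lengths in scan order
    if flat[0] == 1:                                 # ensure the code starts with a 0-run
        runs = [0] + runs
    return runs

# Tiny illustrative image (not the one on the slide):
img = np.array([[0, 0, 1, 1, 0],
                [0, 1, 1, 1, 0]])
print(run_length_encode(img))   # [2, 2, 2, 3, 1]
```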

COMP322/S2000/L202 Recognition: Object Descriptor
An "object" in a binary image is indicated by "1".
Moments
The moments of an object O are defined as
    m_{kj} = \sum_{(x,y) \in O} x^k y^j
where (x,y) are the coordinates of a pixel in O.
Consider:
1. k = 0, j = 0  ==>  m_00 = \sum_{(x,y) \in O} 1  ==>  the size of O, i.e. the number of pixels with value 1.
   m_00 = 29 (example)
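A minimal NumPy sketch of the moment definition above, assuming x is the column index and y the row index of a foreground (value-1) pixel; the course may use the opposite convention.

```python
import numpy as np

def moment(img, k, j):
    """m_kj = sum over object pixels of x^k * y^j (x = column, y = row)."""
    ys, xs = np.nonzero(img)           # coordinates of all 1-pixels
    return np.sum(xs**k * ys**j)

# m_00 is simply the number of object pixels (the size of O).
img = np.zeros((8, 9), dtype=int)      # hypothetical image, not the slide's
img[2:5, 3:7] = 1
print(moment(img, 0, 0))               # 12 pixels in this example
```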

COMP322/S2000/L203 Recognition: Object Descriptor
2. k = 1, j = 0  ==>  m_10 = \sum_{(x,y) \in O} x;  m_10 = 100 (example)
3. k = 0, j = 1  ==>  m_01 = \sum_{(x,y) \in O} y;  m_01 = 111 (example)
Centroid (center of mass):
    (x_c, y_c) = (m_10 / m_00, m_01 / m_00) = (100/29, 111/29) = (3.45, 3.83) (example)
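The centroid follows directly from the zeroth- and first-order moments; a short sketch under the same x = column, y = row assumption as before:

```python
import numpy as np

def centroid(img):
    """(x_c, y_c) = (m_10 / m_00, m_01 / m_00) for the 1-pixels of img."""
    ys, xs = np.nonzero(img)
    m00 = xs.size                            # m_00: number of object pixels
    return xs.sum() / m00, ys.sum() / m00    # (m_10 / m_00, m_01 / m_00)

# With the slide's example values m_00 = 29, m_10 = 100, m_01 = 111:
print(100 / 29, 111 / 29)                    # ~3.45, ~3.83 as on the slide
```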

COMP322/S2000/L204 Recognition: Object Descriptor
Central Moments
The central moments of an object O are defined as
    \mu_{kj} = \sum_{(x,y) \in O} (x - x_c)^k (y - y_c)^j
where (x,y) are the coordinates of a pixel in O and (x_c, y_c) is the centroid of the object.
Note: central moments are invariant to translation of the object. Q: what about rotation?
If the origin is translated to the centroid, the central moments become the standard moments.
μ_11 is called the product moment; μ_20 and μ_02 are the moments of inertia of the object w.r.t. the x and y axes through the centroid.
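A sketch of the central-moment definition, again assuming x = column and y = row:

```python
import numpy as np

def central_moment(img, k, j):
    """mu_kj = sum over object pixels of (x - x_c)^k * (y - y_c)^j."""
    ys, xs = np.nonzero(img)
    xc, yc = xs.mean(), ys.mean()      # centroid (m_10/m_00, m_01/m_00)
    return np.sum((xs - xc)**k * (ys - yc)**j)

# mu_10 and mu_01 are always (numerically) zero by construction:
img = np.zeros((6, 6), dtype=int)      # hypothetical image
img[1:4, 2:5] = 1
print(round(central_moment(img, 1, 0), 10),    # 0.0
      round(central_moment(img, 0, 1), 10))    # 0.0
```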

COMP322/S2000/L205 Recognition: Object Descriptor
Worked example (see the image in the original slide):
    m_00 = 9; m_10 = 36; m_01 = 27; (x_c, y_c) = (36/9, 27/9) = (4, 3);
    μ_10 = 0; μ_01 = 0; μ_11 = 12; μ_20 = 30; μ_02 = 6
Same image translated:
    m_00 = 9; m_10 = 36; m_01 = 18; (x_c, y_c) = (36/9, 18/9) = (4, 2);
    μ_10 = 0; μ_01 = 0; μ_11 = 12; μ_20 = 30; μ_02 = 6
The central moments are unchanged, illustrating their invariance to translation.
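A quick numerical check of this translation invariance, using a hypothetical object (not the one on the slide):

```python
import numpy as np

def central_moments_2nd(img):
    """Return (mu_11, mu_20, mu_02) of the 1-pixels of img (x = col, y = row)."""
    ys, xs = np.nonzero(img)
    dx, dy = xs - xs.mean(), ys - ys.mean()
    return np.sum(dx * dy), np.sum(dx**2), np.sum(dy**2)

img = np.zeros((8, 8), dtype=int)                    # hypothetical L-shaped object
img[1:5, 2] = 1
img[4, 2:6] = 1
shifted = np.roll(img, shift=(2, 1), axis=(0, 1))    # translate by 2 rows, 1 column

print(central_moments_2nd(img))        # same triple for both images...
print(central_moments_2nd(shifted))    # ...since central moments ignore translation
```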

COMP322/S2000/L206 Recognition: Object Descriptor
Orientation of the Object:
The principal angle of an object O is defined as
    \theta = \tfrac{1}{2} \, \mathrm{atan2}(2\mu_{11}, \; \mu_{20} - \mu_{02})
where atan2(y, x) is defined by the following table:

    case      quadrant   atan2(y, x)
    x > 0     1, 4       arctan(y/x)
    x = 0     1, 4       sign(y) · π/2
    x < 0     2, 3       arctan(y/x) + sign(y) · π

Example given in class.
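A sketch of the principal-angle computation. The formula θ = ½·atan2(2μ_11, μ_20 − μ_02) is the standard moment-based orientation and is assumed here to be the equation dropped from the slide; the example reuses the second central moments from slide L205.

```python
import math

def principal_angle(mu11, mu20, mu02):
    """theta = 1/2 * atan2(2*mu_11, mu_20 - mu_02), in radians."""
    return 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)

# Values from the worked example on slide L205:
theta = principal_angle(mu11=12, mu20=30, mu02=6)
print(math.degrees(theta))    # 22.5 degrees for that example
```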

COMP322/S2000/L207 Recognition: Object Description
For a robot to grip a part, the exact shape of the part does not always need to be known. If we know the centroid, the principal axes, and the bounding rectangle of the part, the robot should be able to pick it up.
The bounding rectangle is defined as the smallest rectangle that encloses the object and is aligned with the object's orientation. Refer to the class notes for details and examples.
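One way to obtain a rectangle aligned with the object's orientation is to rotate the centred pixel coordinates into the principal-axis frame, take the min/max extents there, and map the corners back. A minimal sketch under the same x = column, y = row assumption as the earlier snippets; the class notes may present a different construction.

```python
import numpy as np

def oriented_bounding_rectangle(img):
    """Corners of the smallest rectangle enclosing the 1-pixels of img,
    aligned with the object's principal angle (x = col, y = row)."""
    ys, xs = np.nonzero(img)
    xc, yc = xs.mean(), ys.mean()
    dx, dy = xs - xc, ys - yc
    mu11, mu20, mu02 = np.sum(dx * dy), np.sum(dx**2), np.sum(dy**2)
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    c, s = np.cos(theta), np.sin(theta)
    u = c * dx + s * dy              # coordinate along the principal axis
    v = -s * dx + c * dy             # coordinate along the perpendicular axis

    # Rectangle corners in (u, v), mapped back to image (x, y) coordinates.
    corners_uv = [(u.min(), v.min()), (u.max(), v.min()),
                  (u.max(), v.max()), (u.min(), v.max())]
    return [(xc + c * cu - s * cv, yc + s * cu + c * cv) for cu, cv in corners_uv]

img = np.zeros((10, 10), dtype=int)  # hypothetical diagonal-bar object
img[2, 2] = img[3, 3] = img[4, 4] = img[5, 5] = img[6, 6] = 1
print(oriented_bounding_rectangle(img))
```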