Blobworld Texture Features

Blobworld Texture Features
Prepared by Dong-Hui Xu, DePaul University, Chicago

Feature Extraction
Input: image. Output: pixel features (color features, texture features, and position features).
Algorithm: select an appropriate scale for each pixel and extract the features for that pixel at the selected scale.
[Slide diagram: original image → feature extraction → pixel texture features (polarity, anisotropy, texture contrast).]
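As a rough illustration of this pipeline (not part of the original slides), the sketch below assembles a per-pixel descriptor from three L*a*b* color channels, three texture features (polarity, anisotropy, texture contrast — left as placeholders here and filled in by the sketches after the later slides), and normalized (x, y) position. The exact layout of the feature vector is an assumption.

```python
import numpy as np

def pixel_features(lab_image):
    """Assemble per-pixel features: color + texture + position.

    lab_image: H x W x 3 array, assumed to be in L*a*b* space.
    Returns an (H*W) x 8 array; the three texture columns are zeros here
    and would be filled with polarity, anisotropy and texture contrast
    computed at each pixel's selected scale.
    """
    H, W, _ = lab_image.shape

    # Color features: the three L*a*b* channels.
    color = lab_image.reshape(H * W, 3).astype(float)

    # Texture features: polarity, anisotropy, texture contrast (placeholders).
    texture = np.zeros((H * W, 3))

    # Position features: normalized (x, y) coordinates of each pixel.
    ys, xs = np.mgrid[0:H, 0:W]
    position = np.stack([xs.ravel() / W, ys.ravel() / H], axis=1)

    return np.hstack([color, texture, position])
```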

Texture Scale
Texture is a local neighborhood property. Texture features computed at the wrong scale lead to confusion; they should be computed at a scale appropriate to the local structure being described.
[Slide figure: the white rectangles show some sample texture scales from the image.]

Scale Selection Terminology
Gradient of the L* component (assuming the image is in the L*a*b* color space): ∇I
Symmetric Gaussian: Gσ(x, y) = Gσ(x) · Gσ(y)
Second moment matrix: Mσ(x, y) = Gσ(x, y) ∗ (∇I)(∇I)^T
Notes:
- Gσ(x, y) is a separable approximation to a Gaussian.
- σ is the standard deviation of the Gaussian, taking the values 0, 0.5, ..., 3.5.
- σ controls the size of the window around each pixel (corresponding window sizes: 1, 2, 5, 10, 17, 26, 37, 50).
- Mσ(x, y) is a 2×2 matrix computed at the different scales defined by σ.
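A minimal sketch of how Mσ could be computed with NumPy/SciPy; the Sobel derivative estimate and the reflect boundary mode are implementation choices, not specified on the slide.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def second_moment_matrix(L, sigma):
    """Entries of Msigma(x, y) = Gsigma * (grad I)(grad I)^T at every pixel.

    L: 2-D array holding the L* component of the image.
    Returns (Mxx, Mxy, Myy), the three distinct entries of the symmetric
    2x2 matrix, each Gaussian-smoothed with standard deviation sigma.
    """
    # Gradient of the L* component (Sobel as a simple derivative estimate).
    Ix = sobel(L, axis=1, mode='reflect')
    Iy = sobel(L, axis=0, mode='reflect')

    # Outer product of the gradient with itself, smoothed by the
    # separable Gaussian window Gsigma(x, y) = Gsigma(x) * Gsigma(y).
    Mxx = gaussian_filter(Ix * Ix, sigma)
    Mxy = gaussian_filter(Ix * Iy, sigma)
    Myy = gaussian_filter(Iy * Iy, sigma)
    return Mxx, Mxy, Myy
```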

Scale Selection (continued)
Make use of polarity (a measure of the extent to which the gradient vectors in a given neighborhood all point in the same direction) to select the scale at which Mσ is computed:
- Edge: polarity is close to 1 for all scales σ.
- Texture: polarity varies with σ.
- Uniform region: polarity takes on arbitrary values.

Scale Selection (continued)
n is a unit vector perpendicular to the dominant orientation.
The notation [x]+ means x if x > 0, and 0 otherwise; [x]− means x if x < 0, and 0 otherwise.
E+ and E− are the Gaussian-weighted sums of the rectified parts of ∇I · n over the window Ω:
E+ = Σ(x,y)∈Ω Gσ(x, y) [∇I · n]+ and E− = Σ(x,y)∈Ω Gσ(x, y) |[∇I · n]−|,
and the polarity at scale σ is pσ = |E+ − E−| / (E+ + E−).
We can think of E+ and E− as measures of how many gradient vectors in the window are on the positive side and how many are on the negative side of the dominant orientation.
Example: with n = [1 1], x = [1 0.6] lies on the positive side (n · x > 0), while x' = [−1 −0.6] lies on the negative side (n · x' < 0).
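A sketch of the polarity computation under the definitions above, assuming the gradient images Ix, Iy and the Mσ entries from the earlier sketch. Taking the dominant orientation from the leading eigenvector of Mσ is an implementation choice consistent with, but not spelled out on, the slide.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def polarity(Ix, Iy, Mxx, Mxy, Myy, sigma):
    """Polarity p_sigma = |E+ - E-| / (E+ + E-) at every pixel.

    Ix, Iy: gradient of the L* component; Mxx, Mxy, Myy: entries of
    Msigma from the previous sketch.
    """
    # Angle of the leading eigenvector of [[Mxx, Mxy], [Mxy, Myy]]:
    # the dominant gradient direction.  n (perpendicular to the dominant
    # texture orientation) points along it.
    theta = 0.5 * np.arctan2(2.0 * Mxy, Mxx - Myy)
    nx, ny = np.cos(theta), np.sin(theta)

    # Projection of the gradient onto n, split into rectified parts and
    # summed with Gaussian weights (the normalization cancels in the ratio).
    proj = Ix * nx + Iy * ny
    e_plus = gaussian_filter(np.maximum(proj, 0.0), sigma)    # E+
    e_minus = gaussian_filter(np.maximum(-proj, 0.0), sigma)  # |E-|

    return np.abs(e_plus - e_minus) / (e_plus + e_minus + 1e-12)
```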

Scale Selection (continued)
Texture scale selection is based on the derivative of the polarity with respect to scale σ.
Algorithm:
1. Compute polarity at every pixel in the image for σk = k/2, k = 0, 1, ..., 7.
2. Convolve each polarity image with a Gaussian of standard deviation 2σk to obtain a smoothed polarity image.
3. For each pixel, the selected scale is the first value of σ for which the difference between polarity values at successive scales is less than 2 percent.
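A sketch of this scale-selection loop; the polarity images for each σk are assumed to come from the previous sketch, and the 2 percent test is applied as an absolute difference since polarity lies in [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def select_scales(polarity_stack, sigmas, tol=0.02):
    """Per-pixel scale selection from a stack of polarity images.

    polarity_stack[k]: polarity image computed at scale sigmas[k] = k/2.
    Each polarity image is smoothed with a Gaussian of std. dev. 2*sigma_k;
    the selected scale is the first sigma at which the smoothed polarity
    changes by less than `tol` from the previous scale.
    """
    smoothed = [gaussian_filter(p, 2.0 * s) if s > 0 else p
                for p, s in zip(polarity_stack, sigmas)]

    selected = np.full(smoothed[0].shape, sigmas[-1])  # default: largest scale
    done = np.zeros(smoothed[0].shape, dtype=bool)

    for k in range(1, len(sigmas)):
        stable = (np.abs(smoothed[k] - smoothed[k - 1]) < tol) & ~done
        selected[stable] = sigmas[k]
        done |= stable
    return selected
```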

Texture Feature Extraction
Extract the texture features at the selected scale σ*:
- Polarity (polarity at the selected scale): p = pσ*
- Anisotropy: a = 1 − λ2/λ1, where λ1 and λ2 (λ1 ≥ λ2) are the eigenvalues of Mσ. The ratio λ2/λ1 measures the degree of orientation: when λ1 is large compared to λ2, the local neighborhood possesses a dominant orientation; when they are close, there is no dominant orientation; when both are small, the local neighborhood is approximately constant.
- Texture contrast: c = 2 √(λ1 + λ2)
A pixel is considered homogeneous if λ1 + λ2 is less than a local threshold.
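A sketch of the eigenvalue-based features, assuming the Mσ entries (evaluated at each pixel's selected scale) from the earlier sketches; the small epsilon guarding the division is an implementation detail, not part of the slide.

```python
import numpy as np

def anisotropy_and_contrast(Mxx, Mxy, Myy):
    """Anisotropy a = 1 - lambda2/lambda1, contrast c = 2*sqrt(lambda1+lambda2).

    lambda1 >= lambda2 >= 0 are the eigenvalues of the 2x2 second moment
    matrix at each pixel (closed form for a symmetric 2x2 matrix).
    """
    trace = Mxx + Myy
    root = np.sqrt((Mxx - Myy) ** 2 + 4.0 * Mxy ** 2)
    lam1 = 0.5 * (trace + root)
    lam2 = 0.5 * (trace - root)

    anisotropy = 1.0 - lam2 / (lam1 + 1e-12)
    contrast = 2.0 * np.sqrt(np.maximum(lam1 + lam2, 0.0))
    return anisotropy, contrast
```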

Application to Protein Crystal Images
K-means clustering result (number of clusters = 10, similarity measure = Euclidean distance); different colors represent different textures. The original image is in PGM (Portable Gray Map) format.

Application to Outdoor Objects
K-means clustering result (number of clusters = 10, similarity measure = Euclidean distance); different colors represent different textures. The original image is in JPEG (Joint Photographic Experts Group) format.
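The clustering step shown on these slides could be reproduced roughly as below with scikit-learn's KMeans (Euclidean distance, 10 clusters). Note that the Blobworld paper itself fits a Gaussian mixture with EM rather than K-means, so this sketch only mirrors the experiment reported here.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(features, image_shape, k=10):
    """Cluster per-pixel feature vectors and return a label image.

    features: (H*W) x D array of pixel features (e.g., from pixel_features).
    image_shape: (H, W).  Each label corresponds to one texture cluster;
    different colors in the result images represent different labels.
    """
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
    return labels.reshape(image_shape)
```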

References
C. Carson, S. Belongie, H. Greenspan, and J. Malik, "Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 1026-1038, 2002.
W. Forstner, "A Framework for Low Level Feature Extraction," Proc. European Conf. Computer Vision, pp. 383-394, 1994.