Texture Readings: Ch 7: all of it plus Carson paper


Texture Readings: Ch 7: all of it plus Carson paper Structural vs. Statistical Approaches Edge-Based Measures Local Binary Patterns Co-occurrence Matrices Laws' Filters & Gabor Filters Blobworld Texture Features that select scale

Texture Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern

Problem with Structural Approach How do you decide what is a texel? Ideas?

Natural Textures from VisTex grass leaves What/Where are the texels?

The Case for Statistical Texture Segmenting out texels is difficult or impossible in real images. Numeric quantities or statistics that describe a texture can be computed from the gray tones (or colors) alone. This approach is less intuitive, but is computationally efficient. It can be used for both classification and segmentation.

Some Simple Statistical Texture Measures 1. Edge Density and Direction Use an edge detector as the first step in texture analysis. The number of edge pixels in a fixed-size region tells us how busy that region is. The directions of the edges also help characterize the texture.

Two Edge-based Texture Measures 1. edgeness per unit area 2. edge magnitude and direction histograms Fedgeness = |{ p | gradient_magnitude(p) ≥ threshold}| / N where N is the size of the unit area Fmagdir = ( Hmagnitude, Hdirection ) where these are the normalized histograms of gradient magnitudes and gradient directions, respectively.
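As a sketch of the two measures above, the following computes Fedgeness and Fmagdir using simple central-difference gradients; the threshold and histogram bin count are illustrative choices, not values from the text.

```python
import numpy as np

def edge_texture_features(gray, threshold=30.0, nbins=8):
    """Edgeness per unit area plus gradient magnitude/direction histograms."""
    g = np.asarray(gray, dtype=float)
    # simple central-difference gradients (zero at the border)
    gr = np.zeros_like(g)
    gc = np.zeros_like(g)
    gr[1:-1, :] = (g[2:, :] - g[:-2, :]) / 2.0
    gc[:, 1:-1] = (g[:, 2:] - g[:, :-2]) / 2.0
    mag = np.hypot(gr, gc)
    direction = np.arctan2(gr, gc)

    # F_edgeness: fraction of pixels whose gradient magnitude >= threshold
    f_edgeness = np.count_nonzero(mag >= threshold) / mag.size

    # F_magdir: normalized histograms of gradient magnitude and direction
    h_mag, _ = np.histogram(mag, bins=nbins)
    h_dir, _ = np.histogram(direction, bins=nbins, range=(-np.pi, np.pi))
    h_mag = h_mag / h_mag.sum()
    h_dir = h_dir / h_dir.sum()
    return f_edgeness, h_mag, h_dir
```

Comparing two textures then reduces to comparing their histograms, e.g. with an L1 distance.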

Example Original Image Frei-Chen Thresholded Edge Image Edge Image

Local Binary Pattern Measure For each pixel p, create an 8-bit number b1 b2 b3 b4 b5 b6 b7 b8, where bi = 0 if neighbor i has value less than or equal to p's value and 1 otherwise. Represent the texture in the image (or a region) by the histogram of these numbers. Example: for the 3x3 window with gray tones 100 101 103 / 40 50 80 / 50 60 90 and center value 50, the thresholded neighbors are 1 1 1 / 0 . 1 / 0 1 1 (neighbors numbered 1 2 3 across the top, 4 and 5 on the sides, 8 7 6 across the bottom).
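A minimal LBP implementation along these lines; the exact neighbor-numbering convention of the slide's figure is an assumption here (any fixed ordering gives a usable texture code, as long as it is used consistently).

```python
import numpy as np

def lbp_code(window3x3):
    """8-bit LBP code for the center pixel of a 3x3 window.

    Bit b_i is 0 when neighbor i <= center, 1 otherwise; neighbors are
    read clockwise starting from the top-left corner (assumed ordering).
    """
    c = window3x3[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for r, co in order:
        code = (code << 1) | (0 if window3x3[r, co] <= c else 1)
    return code

def lbp_histogram(gray):
    """Histogram of LBP codes over all interior pixels of a gray image."""
    g = np.asarray(gray, dtype=float)
    hist = np.zeros(256, dtype=int)
    for r in range(1, g.shape[0] - 1):
        for c in range(1, g.shape[1] - 1):
            hist[lbp_code(g[r - 1:r + 2, c - 1:c + 2])] += 1
    return hist
```

The 256-bin histogram (often normalized) is the texture descriptor compared between images.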

Example Fids (Flexible Image Database System) retrieves images similar to the query image, using LBP texture as the texture measure and comparing their LBP histograms.

Example Low-level measures don’t always find semantically similar images.

Co-occurrence Matrix Features A co-occurrence matrix is a 2D array C in which both the rows and columns represent a set of possible image values. C(i,j) indicates how many times value i co-occurs with value j in a particular spatial relationship d. The spatial relationship is specified by a vector d = (dr, dc).

Co-occurrence Example [Figure: a small gray-tone image and its co-occurrence matrix Cd for d = (3,1); the entries do not survive in text form.] From Cd we can compute Nd, the normalized co-occurrence matrix, where each value is divided by the sum of all the values.
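The construction of Cd can be sketched directly from the definition; a naive double loop is adequate for small images.

```python
import numpy as np

def cooccurrence(img, d, levels):
    """Co-occurrence matrix C_d for displacement d = (dr, dc).

    C[i, j] counts how often value i at (r, c) co-occurs with value j
    at (r + dr, c + dc), staying inside the image.
    """
    img = np.asarray(img)
    dr, dc = d
    rows, cols = img.shape
    C = np.zeros((levels, levels), dtype=int)
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                C[img[r, c], img[r2, c2]] += 1
    return C
```

The normalized matrix is then simply `N = C / C.sum()`.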

Co-occurrence Features The standard features are sums over the entries of the normalized matrix Nd: energy, entropy, contrast, and homogeneity. What do these measure? Energy measures the uniformity of the normalized matrix.
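Assuming the standard Haralick-style definitions of these sums (the slide's own formulas did not survive in text form), the features can be computed like this:

```python
import numpy as np

def cooccurrence_features(C):
    """Energy, entropy, contrast, homogeneity from a co-occurrence matrix.

    Energy is the sum of squared entries of the normalized matrix, so it
    is maximal (1.0) when all co-occurrences fall in a single cell,
    i.e. a perfectly uniform texture.
    """
    N = C / C.sum()                      # normalized matrix N_d
    i, j = np.indices(N.shape)
    energy = np.sum(N ** 2)
    entropy = -np.sum(N[N > 0] * np.log2(N[N > 0]))
    contrast = np.sum((i - j) ** 2 * N)
    homogeneity = np.sum(N / (1.0 + np.abs(i - j)))
    return energy, entropy, contrast, homogeneity
```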

But how do you choose d? This is actually a critical question with all the statistical texture methods. Are the “texels” tiny, medium, large, all three …? Not really a solved problem. Zucker and Terzopoulos suggested using a χ² statistical test to select the value(s) of d that have the most structure for a given class of images.

Example

Laws’ Texture Energy Features Signal-processing-based algorithms use texture filters applied to the image to create filtered images from which texture features are computed. The Laws Algorithm Filter the input image using texture filters. Compute texture energy by summing the absolute value of filtering results in local neighborhoods around each pixel. Combine features to achieve rotational invariance.

Laws’ texture masks (1)

Laws’ texture masks (2) Creation of 2D masks: each 5x5 mask is the outer product of two 1D vectors, e.g. E5L5 is formed from E5 (vertical) and L5 (horizontal).

9D feature vector for pixel 1. Subtract the mean neighborhood intensity from each (center) pixel. 2. Apply the 16 5x5 masks to get 16 filtered images Fk, k = 1 to 16. 3. Produce 16 texture energy maps using 15x15 windows: Ek[r,c] = ∑ |Fk[i,j]|, summed over the window. 4. Replace each distinct symmetric pair of maps with its average, giving 9 features (9 filtered images).
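The mask-generation step can be sketched as outer products of the four standard 1-D Laws vectors (the dictionary keys are illustrative names):

```python
import numpy as np

# the four standard 1-D Laws vectors
L5 = np.array([1, 4, 6, 4, 1], dtype=float)    # Level
E5 = np.array([-1, -2, 0, 2, 1], dtype=float)  # Edge
S5 = np.array([-1, 0, 2, 0, -1], dtype=float)  # Spot
R5 = np.array([1, -4, 6, -4, 1], dtype=float)  # Ripple

def laws_masks():
    """The 16 5x5 Laws masks, each an outer product of two 1-D vectors."""
    vecs = {'L5': L5, 'E5': E5, 'S5': S5, 'R5': R5}
    return {a + b: np.outer(va, vb)
            for a, va in vecs.items()
            for b, vb in vecs.items()}
```

Per the slide, symmetric pairs of energy maps such as E5L5/L5E5 are then averaged, which is what collapses the 16 maps to 9 features.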

Laws Filters

Laws Process

Example: Using Laws Features to Cluster water tiger fence flag grass Is there a neighborhood size problem with Laws? small flowers big flowers

Features from sample images

Gabor Filters Similar approach to Laws Wavelets at different frequencies and different orientations
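A minimal sketch of one such filter: the real part of a Gabor kernel, a Gaussian-windowed sinusoid. A bank of these at several frequencies and orientations gives the wavelet-like filter set described above; the parameter names and values below are illustrative.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, wavelength):
    """Real part of a Gabor filter.

    theta is the orientation in radians; wavelength sets the frequency
    of the sinusoidal carrier; sigma is the Gaussian envelope width.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # rotate coordinates by theta
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength)
    return envelope * carrier
```

As with Laws filters, texture features are then computed from the responses of the image convolved with each kernel in the bank.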

Gabor Filters

Gabor Filters

Segmentation with Color and Gabor-Filter Texture (Smeulders)

Blobworld Texture Features Choose the best scale instead of using fixed scale(s) Used successfully in color/texture segmentation in Berkeley’s Blobworld project

Feature Extraction Algorithm: select an appropriate scale for each pixel and extract features for that pixel at the selected scale. Pixel features: polarity, anisotropy, texture contrast.

Texture Scale Texture is a local neighborhood property. Texture features computed at the wrong scale can lead to confusion. Texture features should be computed at a scale appropriate to the local structure being described. The white rectangles show some sample texture scales from the image.

Scale Selection Terminology Gradient of the L* component (assuming the image is in the L*a*b* color space): ∇I. Gaussian filter: Gσ(x, y) with standard deviation σ. Second moment matrix: Mσ(x, y) = Gσ(x, y) * (∇I)(∇I)T, where (∇I)(∇I)T = [ Ix² IxIy ; IxIy Iy² ]. Note: σ controls the size of the window around each pixel.

Computing the Second Moment Matrix Mσ 1. First compute 3 separate images for Ix², Iy², and IxIy. 2. Then apply a Gaussian filter to each of these images. 3. Then Mσ(i,j) is assembled from Ix²(i,j), Iy²(i,j), and IxIy(i,j).
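The three steps can be sketched with plain NumPy; truncating the Gaussian kernel at roughly 3σ is an implementation choice, not part of the method.

```python
import numpy as np

def second_moment_matrix(gray, sigma):
    """M_sigma at every pixel: Gaussian-smoothed outer product of the gradient.

    Returns the three distinct entries of the 2x2 matrix
    (smoothed Ix^2, Iy^2, IxIy), from which M_sigma(i,j) is assembled.
    """
    g = np.asarray(gray, dtype=float)
    # step 1: gradient images and their products
    gy, gx = np.gradient(g)
    Ix2, Iy2, Ixy = gx * gx, gy * gy, gx * gy

    # step 2: separable Gaussian smoothing, kernel truncated at ~3 sigma
    half = max(1, int(3 * sigma))
    t = np.arange(-half, half + 1)
    k = np.exp(-t ** 2 / (2.0 * sigma ** 2))
    k /= k.sum()

    def smooth(a):
        a = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, a)

    # step 3: the caller assembles M_sigma(i,j) from these three images
    return smooth(Ix2), smooth(Iy2), smooth(Ixy)
```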

Scale Selection (continued) Make use of polarity (a measure of the extent to which the gradient vectors in a certain neighborhood all point in the same direction) to select the scale at which Mσ is computed Edge: polarity is close to 1 for all scales σ Texture: polarity varies with σ Uniform: polarity takes on arbitrary values

Scale Selection (continued) Polarity: p = |E+ − E-| / (E+ + E-), where n is a unit vector perpendicular to the dominant orientation, E+ sums [∇I·n]+ over the window, and E- sums the magnitudes of [∇I·n]-. The notation [x]+ means x if x > 0, else 0; [x]- means x if x < 0, else 0. We can think of E+ and E- as measures of how many gradient vectors in the window are on the positive side and how many are on the negative side of the dominant orientation. Example: for n = [1 1], x = [1 .6] is on the positive side and x’ = [-1 -.6] is on the negative side.
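A sketch of the polarity computation under the definition p = |E+ − E-| / (E+ + E-) from the Blobworld paper; making the Gaussian window weights optional is a simplification here.

```python
import numpy as np

def polarity(grads, n_hat, weights=None):
    """Polarity of gradient vectors relative to a unit vector n_hat.

    E+ accumulates the positive parts [grad . n]+ and E- the magnitudes
    of the negative parts, optionally weighted (e.g. by a Gaussian).
    """
    grads = np.asarray(grads, dtype=float)
    dots = grads @ np.asarray(n_hat, dtype=float)
    if weights is None:
        weights = np.ones(len(dots))
    e_plus = np.sum(weights * np.clip(dots, 0, None))    # positive side
    e_minus = np.sum(weights * -np.clip(dots, None, 0))  # negative side
    total = e_plus + e_minus
    return abs(e_plus - e_minus) / total if total > 0 else 0.0
```

Gradients all on one side of the dominant orientation give p = 1 (an edge-like window); an even split gives p = 0.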

Scale Selection (continued) Texture scale selection is based on the derivative of the polarity with respect to scale σ. Algorithm: 1. Compute polarity at every pixel in the image for σk = k/2, k = 0, 1, …, 7. 2. Convolve each polarity image with a Gaussian with standard deviation 2σk to obtain a smoothed polarity image. 3. For each pixel, the selected scale is the first value of σ for which the difference between values of polarity at successive scales is less than 2 percent.

Texture Feature Extraction Extract the texture features at the selected scale. Polarity (at the selected scale): p = pσ*. Anisotropy: a = 1 − λ2/λ1, where λ1 and λ2 (λ1 ≥ λ2) denote the eigenvalues of Mσ. λ2/λ1 measures the degree of orientation: when λ1 is large compared to λ2, the local neighborhood possesses a dominant orientation; when they are close, there is no dominant orientation; when both are small, the local neighborhood is roughly constant. Texture contrast: C = 2(λ1 + λ2)^1/2. A pixel is considered homogeneous if λ1 + λ2 is less than a local threshold.
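Assuming the contrast definition from the Carson et al. paper, C = 2·sqrt(λ1 + λ2), the three features follow directly from the eigenvalues of the 2x2 second moment matrix:

```python
import numpy as np

def blobworld_texture_features(p_sigma_star, M):
    """Polarity, anisotropy and contrast from a 2x2 second moment matrix.

    p_sigma_star is the polarity already computed at the selected scale;
    eigenvalues are sorted so that lambda1 >= lambda2.
    """
    evals = np.linalg.eigvalsh(np.asarray(M, dtype=float))
    lam2, lam1 = evals                       # eigvalsh returns ascending order
    anisotropy = 1.0 - lam2 / lam1 if lam1 > 0 else 0.0
    contrast = 2.0 * np.sqrt(lam1 + lam2)    # C = 2*(lam1 + lam2)^(1/2)
    return p_sigma_star, anisotropy, contrast
```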

Blobworld Segmentation Using Color and Texture

Application to Protein Crystal Images K-means clustering result (number of clusters equal to 10; similarity measure is Euclidean distance). Different colors represent different textures. Original image in PGM (Portable Gray Map) format.

References Chad Carson, Serge Belongie, Hayit Greenspan, and Jitendra Malik, "Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 1026-1038, 2002. W. Forstner, "A Framework for Low Level Feature Extraction," Proc. European Conf. Computer Vision, pp. 383-394, 1994.