Texture

Texture is a description of the spatial arrangement of color or intensities in an image or a selected region of an image. Structural approach: a set of texels in some regular or repeated pattern.

Problem with the Structural Approach: How do you decide what is a texel? Ideas?

Natural Textures from VisTex: grass, leaves. What/where are the texels?

The Case for Statistical Texture Segmenting out texels is difficult or impossible in real images. Numeric quantities or statistics that describe a texture can be computed from the gray tones (or colors) alone. This approach is less intuitive, but is computationally efficient. It can be used for both classification and segmentation.

Some Simple Statistical Texture Measures 1. Edge Density and Direction Use an edge detector as the first step in texture analysis. The number of edge pixels in a fixed-size region tells us how busy that region is. The directions of the edges also help characterize the texture.

Two Edge-based Texture Measures 1. edgeness per unit area 2. edge magnitude and direction histograms

F_edgeness = |{ p : gradient_magnitude(p) ≥ threshold }| / N, where N is the size of the unit area.

F_magdir = (H_magnitude, H_direction), where these are the normalized histograms of gradient magnitudes and gradient directions, respectively.
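
As a concrete illustration, here is a minimal Python sketch of the edgeness measure, assuming numpy and scipy are available; the Sobel operator and the 16x16 window are illustrative choices, not part of the definition above.

import numpy as np
from scipy import ndimage

def edgeness_per_unit_area(image, threshold, size=16):
    # Gradient magnitude via Sobel derivatives (any edge detector would do).
    gx = ndimage.sobel(image.astype(float), axis=1)
    gy = ndimage.sobel(image.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    # Averaging the 0/1 edge mask over each size-by-size neighborhood gives
    # |{ p : gradient_magnitude(p) >= threshold }| / N at every pixel.
    edge_mask = (mag >= threshold).astype(float)
    return ndimage.uniform_filter(edge_mask, size=size)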

Example: original image, Frei-Chen edge image, thresholded edge image.

Local Binary Pattern Measure For each pixel p, create an 8-bit number b1 b2 b3 b4 b5 b6 b7 b8, where bi = 0 if neighbor i has value less than or equal to p's value and 1 otherwise. Represent the texture in the image (or a region) by the histogram of these numbers.
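
A small Python sketch of this 8-neighbor LBP, assuming numpy; the bit ordering (clockwise from the top-left neighbor) is an arbitrary but fixed choice.

import numpy as np

def lbp_histogram(image):
    img = image.astype(int)
    H, W = img.shape
    center = img[1:-1, 1:-1]
    # Eight neighbors, clockwise from the top-left; each contributes one bit.
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    codes = np.zeros_like(center)
    for bit, (dr, dc) in enumerate(offsets):
        neighbor = img[1+dr:H-1+dr, 1+dc:W-1+dc]
        # b_i = 0 if neighbor <= center and 1 otherwise, as defined above.
        codes |= (neighbor > center).astype(int) << bit
    # The texture descriptor is the histogram of the 256 possible codes.
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / hist.sum()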

Example: FIDS (Flexible Image Database System) retrieves images similar to the query image, using LBP as the texture measure and comparing the images' LBP histograms.

Example: low-level measures don't always find semantically similar images.

Co-occurrence Matrix Features A co-occurrence matrix is a 2D array C in which both the rows and the columns represent a set of possible image values. C(i,j) indicates how many times value i co-occurs with value j in a particular spatial relationship d. The spatial relationship is specified by a vector d = (dr, dc).

Co-occurrence Example: for d = (3,1), a gray-tone image and its co-occurrence matrix C_d. From C_d we can compute N_d, the normalized co-occurrence matrix, where each value is divided by the sum of all the values.
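
A straightforward Python sketch of C_d and N_d for an image quantized to `levels` gray tones; the explicit loop mirrors the definition (real implementations vectorize it).

import numpy as np

def cooccurrence(image, d, levels):
    dr, dc = d
    H, W = image.shape
    C = np.zeros((levels, levels), dtype=np.int64)
    for r in range(H):
        for c in range(W):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < H and 0 <= c2 < W:
                # value i at (r, c) co-occurs with value j at (r, c) + d
                C[image[r, c], image[r2, c2]] += 1
    N = C / C.sum()        # normalized co-occurrence matrix N_d
    return C, N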

Co-occurrence Features What do these measure? Standard features are computed as sums over the normalized matrix N_d:

Energy = sum_ij N_d(i,j)^2
Entropy = - sum_ij N_d(i,j) log2 N_d(i,j)
Contrast = sum_ij (i - j)^2 N_d(i,j)
Homogeneity = sum_ij N_d(i,j) / (1 + |i - j|)

Energy measures uniformity of the normalized matrix.
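
A numpy sketch of these four features, taking the normalized matrix N_d from the previous sketch:

import numpy as np

def cooccurrence_features(N):
    i, j = np.indices(N.shape)
    energy = np.sum(N ** 2)                             # uniformity
    entropy = -np.sum(N[N > 0] * np.log2(N[N > 0]))     # randomness
    contrast = np.sum((i - j) ** 2 * N)                 # local variation
    homogeneity = np.sum(N / (1.0 + np.abs(i - j)))     # diagonal weight
    return energy, entropy, contrast, homogeneity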

But how do you choose d? This is actually a critical question for all of the statistical texture methods. Are the "texels" tiny, medium, large, all three...? It is not really a solved problem. Zucker and Terzopoulos suggested using a χ² statistical test to select the value(s) of d that have the most structure for a given class of images.

Example

Laws’ Texture Energy Features Signal-processing-based algorithms apply texture filters to the image to create filtered images from which texture features are computed. The Laws algorithm: 1. Filter the input image using texture filters. 2. Compute texture energy by summing the absolute values of the filtering results in local neighborhoods around each pixel. 3. Combine features to achieve rotational invariance.

Laws’ texture masks (1): the standard 1D Laws masks are L5 = [1 4 6 4 1] (level), E5 = [-1 -2 0 2 1] (edge), S5 = [-1 0 2 0 -1] (spot), R5 = [1 -4 6 -4 1] (ripple), and W5 = [-1 2 0 -2 1] (wave).

Laws’ texture masks (2): 2D masks are created by outer products of the 1D masks; for example, E5L5 is the outer product of E5 (as a column vector) with L5 (as a row vector), giving a 5x5 mask.

9D feature vector for each pixel: 1. Subtract the mean neighborhood intensity from the (center) pixel. 2. Dot product the 16 5x5 masks with the neighborhood. 3. Define the 9 features by averaging the energies of symmetric mask pairs (e.g., E5L5 with L5E5) and keeping E5E5, S5S5, and R5R5; L5L5 is not used as a texture feature (see the sketch below).
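
A Python sketch of the full Laws pipeline, assuming numpy and scipy; the 15x15 windows are a typical choice rather than part of the method.

import numpy as np
from scipy import ndimage

# Standard 1D Laws masks: level, edge, spot, ripple.
MASKS_1D = {'L5': np.array([ 1.,  4., 6.,  4.,  1.]),
            'E5': np.array([-1., -2., 0.,  2.,  1.]),
            'S5': np.array([-1.,  0., 2.,  0., -1.]),
            'R5': np.array([ 1., -4., 6., -4.,  1.])}

def laws_features(image, win=15):
    img = image.astype(float)
    # Step 1: subtract the mean neighborhood intensity.
    img = img - ndimage.uniform_filter(img, size=win)
    # Step 2: filter with all 16 2D masks (outer products of the 1D masks),
    # then sum absolute responses locally to get texture energy maps.
    energy = {}
    for a, va in MASKS_1D.items():
        for b, vb in MASKS_1D.items():
            response = ndimage.convolve(img, np.outer(va, vb))
            energy[a + b] = ndimage.uniform_filter(np.abs(response), size=win)
    # Step 3: average symmetric pairs and drop L5L5, leaving 9 features.
    pairs = [('E5','L5'), ('S5','L5'), ('R5','L5'),
             ('S5','E5'), ('R5','E5'), ('R5','S5')]
    feats = [(energy[a + b] + energy[b + a]) / 2 for a, b in pairs]
    feats += [energy[k] for k in ('E5E5', 'S5S5', 'R5R5')]
    return np.stack(feats, axis=-1)    # H x W x 9 feature vector per pixel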

Laws Filters

Laws Process

Example: Using Laws Features to Cluster. Sample images: water, tiger, fence, flag, grass, small flowers, big flowers. Is there a neighborhood size problem with Laws?

Features from sample images

Gabor Filters Similar in approach to Laws: wavelets at different frequencies and different orientations.
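
A minimal numpy sketch of a cosine-phase Gabor kernel and an illustrative bank; the frequencies, orientations, and envelope widths below are assumptions, not prescribed values. As with Laws filters, texture features are local energies of the filter responses.

import numpy as np

def gabor_kernel(freq, theta, sigma, size=31):
    # Sinusoid at spatial frequency `freq` and orientation `theta`,
    # modulated by a Gaussian envelope of standard deviation `sigma`.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * freq * x_rot)

# Illustrative bank: 3 frequencies x 4 orientations.
bank = [gabor_kernel(f, t, sigma=1.0 / f)
        for f in (0.1, 0.2, 0.4)
        for t in np.arange(4) * np.pi / 4]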

Gabor Filters

Segmentation with Color and Gabor-Filter Texture (Smeulders)

A classical texture measure: the autocorrelation function. The autocorrelation function can detect repetitive patterns of texels and also characterizes the fineness/coarseness of the texture. Compare the dot product (energy) of the non-shifted image with a shifted image.
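
A direct Python sketch of the normalized autocorrelation via shifted dot products; an FFT-based version is faster, but this follows the description above literally.

import numpy as np

def autocorrelation(image, max_shift=20):
    img = image.astype(float) - image.mean()
    energy = np.sum(img * img)          # dot product at zero shift
    H, W = img.shape
    ac = np.zeros((max_shift + 1, max_shift + 1))
    for dr in range(max_shift + 1):
        for dc in range(max_shift + 1):
            # Dot product of the image with itself shifted by (dr, dc).
            ac[dr, dc] = np.sum(img[dr:, dc:] * img[:H - dr, :W - dc]) / energy
    return ac    # ac[0, 0] == 1; a slow drop-off indicates coarse texture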

Interpreting autocorrelation: Coarse texture → the function drops off slowly. Fine texture → the function drops off rapidly. The drop-off can differ along r and c. Regular textures → the function has peaks and valleys, and peaks can repeat far away from [0, 0]. Random textures → only a peak at [0, 0], whose breadth gives the size of the texture.

Fourier power spectrum: High-frequency power → fine texture. Concentrated power → regularity. Directionality → directional texture.
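
A short numpy sketch of the centered power spectrum; summing power over rings measures fineness/coarseness, and summing over wedges would measure directionality (the ring helper is an illustrative choice).

import numpy as np

def power_spectrum(image):
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    return np.abs(F) ** 2

def ring_power(P, r0, r1):
    # Total power at radial frequencies between r0 and r1 (pixels from DC).
    h, w = P.shape
    y, x = np.mgrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)
    return P[(r >= r0) & (r < r1)].sum()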

Blobworld Texture Features Choose the best scale instead of using fixed scale(s). Used successfully for color/texture segmentation in Berkeley's Blobworld project.

Feature Extraction Input: an image. Output: pixel features – color features, texture features, position features. Algorithm: select an appropriate scale for each pixel and extract features for that pixel at the selected scale. The texture features extracted are polarity, anisotropy, and texture contrast.

Texture Scale Texture is a local neighborhood property. Texture features computed at the wrong scale can lead to confusion. Texture features should be computed at a scale appropriate to the local structure being described. The white rectangles show some sample texture scales from the image.

Scale Selection Terminology Gradient of the L* component (assuming the image is in the L*a*b* color space): ∇I. Symmetric Gaussian: G_σ(x, y) = G_σ(x) * G_σ(y). Second moment matrix: M_σ(x, y) = G_σ(x, y) * (∇I)(∇I)^T. Notes: G_σ(x, y) is a separable approximation to a Gaussian; σ is the standard deviation of the Gaussian, taken from [0, .5, …, 3.5], and controls the size of the window around each pixel. M_σ(x, y) is a 2x2 matrix computed at different scales defined by σ:

(∇I)(∇I)^T = [ I_x^2     I_x I_y ]
             [ I_x I_y   I_y^2   ]
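
A numpy/scipy sketch of M_σ at every pixel, using a grayscale image as a stand-in for the L* component:

import numpy as np
from scipy import ndimage

def second_moment_matrix(image, sigma):
    I = image.astype(float)
    Iy, Ix = np.gradient(I)                        # the gradient ∇I
    # Smooth the entries of (∇I)(∇I)^T with G_sigma to obtain M_sigma.
    Mxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Mxy = ndimage.gaussian_filter(Ix * Iy, sigma)
    Myy = ndimage.gaussian_filter(Iy * Iy, sigma)
    return Mxx, Mxy, Myy                           # 2x2 M_sigma per pixel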

Scale Selection (continued) Make use of polarity (a measure of the extent to which the gradient vectors in a certain neighborhood all point in the same direction) to select the scale at which M_σ is computed. Edge: polarity is close to 1 for all scales σ. Texture: polarity varies with σ. Uniform region: polarity takes on arbitrary values.

Scale Selection (continued) n is a unit vector perpendicular to the dominant orientation. The notation [x]+ means x if x > 0, else 0; the notation [x]- means x if x < 0, else 0. We can think of E+ and E- as measures of how many gradient vectors in the window are on the positive side and how many are on the negative side of the dominant orientation. Example: for n = [1 1], x = [1 .6] falls on the positive side, while x' = [-1 -.6] falls on the negative side.
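
In the Blobworld paper (Carson et al., cited below), polarity is defined as p = |E+ − E−| / (E+ + E−), where E+ and E− are Gaussian-weighted sums of the rectified positive and negative parts of ∇I · n over the window. A numpy/scipy sketch, reusing second_moment_matrix from above and approximating n at each pixel by the leading eigenvector of M_σ, treated as locally constant within the window:

import numpy as np
from scipy import ndimage

def polarity(image, sigma):
    I = image.astype(float)
    Iy, Ix = np.gradient(I)
    Mxx, Mxy, Myy = second_moment_matrix(image, sigma)
    # Leading eigenvector of M_sigma (the dominant gradient direction).
    lam1 = (Mxx + Myy) / 2 + np.sqrt(((Mxx - Myy) / 2) ** 2 + Mxy ** 2)
    nx, ny = Mxy, lam1 - Mxx
    norm = np.hypot(nx, ny) + 1e-12
    nx, ny = nx / norm, ny / norm
    d = Ix * nx + Iy * ny              # gradient component along n
    # Gaussian-weighted rectified positive and negative parts.
    e_pos = ndimage.gaussian_filter(np.maximum(d, 0.0), sigma)
    e_neg = ndimage.gaussian_filter(np.maximum(-d, 0.0), sigma)
    return np.abs(e_pos - e_neg) / (e_pos + e_neg + 1e-12)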

Scale Selection (continued) Texture scale selection is based on the derivative of the polarity with respect to scale σ. Algorithm: 1. Compute polarity at every pixel in the image for σ_k = k/2 (k = 0, 1, …, 7). 2. Convolve each polarity image with a Gaussian with standard deviation 2σ_k to obtain a smoothed polarity image. 3. For each pixel, the selected scale is the first value of σ for which the difference between polarity values at successive scales is less than 2 percent.
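
A sketch of the selection loop under the same assumptions, with tol = 0.02 implementing the 2 percent test:

import numpy as np
from scipy import ndimage

def select_scale(image, tol=0.02):
    sigmas = [k / 2.0 for k in range(8)]     # sigma_k = k/2, k = 0..7
    selected = np.full(image.shape, sigmas[-1])
    done = np.zeros(image.shape, dtype=bool)
    prev = None
    for s in sigmas:
        p = polarity(image, s)                     # from the sketch above
        p = ndimage.gaussian_filter(p, 2 * s)      # smooth with std 2*sigma_k
        if prev is not None:
            settled = (np.abs(p - prev) < tol) & ~done
            selected[settled] = s
            done |= settled
        prev = p
    return selected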

Texture Feature Extraction Extract the texture features at the selected scale σ*. Polarity (polarity at the selected scale): p = p_σ*. Anisotropy: a = 1 − λ2/λ1, where λ1 and λ2 denote the eigenvalues of M_σ. λ2/λ1 measures the degree of orientation: when λ1 is large compared to λ2, the local neighborhood possesses a dominant orientation; when they are close, there is no dominant orientation; when both are small, the local neighborhood is approximately constant. Local contrast: C = 2(λ1 + λ2)^(3/2). A pixel is considered homogeneous if λ1 + λ2 is less than a local threshold.
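
The eigenvalues of the 2x2 matrix M_σ have a closed form, so all three features can be computed directly; a numpy sketch (the epsilon guard is an added assumption for numerical safety):

import numpy as np

def blobworld_texture_features(Mxx, Mxy, Myy, p_selected):
    # Eigenvalues of [[Mxx, Mxy], [Mxy, Myy]], with lam1 >= lam2 >= 0.
    disc = np.sqrt(((Mxx - Myy) / 2) ** 2 + Mxy ** 2)
    lam1 = (Mxx + Myy) / 2 + disc
    lam2 = (Mxx + Myy) / 2 - disc
    anisotropy = 1 - lam2 / (lam1 + 1e-12)
    contrast = 2 * (lam1 + lam2) ** 1.5    # local contrast as defined above
    return p_selected, anisotropy, contrast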

Blobworld Segmentation Using Color and Texture

Application to Protein Crystal Images Original image in PGM (Portable Gray Map) format. K-means clustering result (number of clusters = 10; similarity measure: Euclidean distance). Different colors represent different textures.

References

Chad Carson, Serge Belongie, Hayit Greenspan, and Jitendra Malik, "Blobworld: Image Segmentation Using Expectation-Maximization and Its Application to Image Querying," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, 2002.

W. Forstner, "A Framework for Low Level Feature Extraction," Proc. European Conf. Computer Vision, 1994.