
What is Texture? Texture depicts spatially repeating patterns. Many natural phenomena are textures: radishes, rocks, yogurt.

Texton Discrimination (Julesz) Human vision is sensitive to differences between some types of elements and appears to be “numb” to other types of differences.

Search Experiment I The subject is told to detect a target element among a number of background elements. In this example, the detection time is independent of the number of background elements.

Search Experiment II In this example, the detection time is proportional to the number of background elements, which suggests that the subject is performing element-by-element scrutiny.

Heuristic (Axiom) I Julesz then conjectured the following axiom: human vision operates in two distinct modes. 1. Preattentive vision: parallel, instantaneous (~ ms), without scrutiny, independent of the number of patterns, covering a large visual field. 2. Attentive vision: serial search by focal attention, in 50 ms steps, limited to a small aperture. Then what are the basic elements?

Heuristic (Axiom) II Julesz’s second heuristic answers this question: textons are the fundamental elements in preattentive vision, including 1. Elongated blobs (rectangles, ellipses, line segments) with attributes such as color, orientation, width, length, and flicker rate. 2. Terminators: ends of line segments. 3. Crossings of line segments. But it is worth noting that Julesz’s conclusions are largely based on an ensemble of artificial texture patterns; it was infeasible to synthesize natural textures for controlled experiments at that time.

Examples Pre-attentive vision is sensitive to size/width and orientation changes.

Examples Sensitive to the number of terminators. Left: figure on background; right: background on figure. See the previous examples for crossings and terminators.

Julesz Conjecture Textures cannot be spontaneously discriminated if they have the same first-order and second-order statistics and differ only in their third-order or higher-order statistics. (later proved wrong)

1st-Order Statistics: 5% white vs. 20% white.

2nd-Order Statistics: both textures are 10% white.
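The distinction can be made concrete with a small sketch (illustrative code, not from the slides; function names are made up): two binary textures with the same first-order statistic, the fraction of white pixels, can differ in a second-order statistic such as the co-occurrence of white pixels at a given offset.

```python
import numpy as np

def first_order(img):
    # First-order statistic: fraction of white pixels (ignores geometry).
    return img.mean()

def cooccurrence_white(img, dy, dx):
    # Second-order statistic: fraction of pixel pairs at offset (dy, dx)
    # that are both white (dy, dx >= 0 for simplicity).
    h, w = img.shape
    a = img[:h - dy, :w - dx]
    b = img[dy:, dx:]
    return (a & b).mean()

rng = np.random.default_rng(0)
scatter = rng.random((64, 64)) < 0.0625   # isolated white dots
pairs = np.zeros((64, 64), dtype=bool)    # same density, dots in horizontal pairs
pairs[::4, ::8] = True
pairs[::4, 1::8] = True
```

Both images are about 6% white, but the paired texture has a much higher white-white co-occurrence at horizontal offset 1.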

Capturing the “essence” of texture, for real images. We don’t want an actual texture realization; we want a texture invariant. What are the tools for capturing the statistical properties of a signal?

Multi-scale filter decomposition: a filter bank applied to the input image.

Filter response histograms

Heeger & Bergen ’95 Start with a noise image as the output. Main loop: 1. Match the pixel histogram of the output image to the input. 2. Decompose the input and output images using a multi-scale filter bank (steerable pyramid). 3. Match the subband histograms of the input and output pyramids. 4. Reconstruct the input and output images (collapse the pyramids).

Image Histograms and Cumulative Histograms: the mapping s = T(r).

Histogram Equalization
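A minimal sketch of equalization for 8-bit grayscale images (names and the demo image are illustrative): each gray level r is mapped through the normalized cumulative histogram, s = T(r).

```python
import numpy as np

def equalize(img, levels=256):
    # s = T(r): map each gray level through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = hist.cumsum() / img.size              # T(r), monotone, in [0, 1]
    return (cdf[img] * (levels - 1)).astype(np.uint8)

# A low-contrast image (values squeezed into [100, 120]) gets stretched
# across nearly the full [0, 255] range.
img = np.tile(np.arange(100, 121, dtype=np.uint8), (21, 1))
eq = equalize(img)
```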

Histogram Matching

Match-histogram code
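The slide’s code itself did not survive the transcript; a minimal stand-in for 8-bit images (illustrative, not the original) maps each source gray level through the source CDF and then through the inverse of the reference CDF.

```python
import numpy as np

def match_histogram(src, ref, levels=256):
    # Map src's gray levels through its own CDF, then through the inverse
    # CDF of ref, so src's histogram approximates ref's.
    src_cdf = np.bincount(src.ravel(), minlength=levels).cumsum() / src.size
    ref_cdf = np.bincount(ref.ravel(), minlength=levels).cumsum() / ref.size
    # For each level r: smallest z with ref_cdf[z] >= src_cdf[r].
    lut = np.searchsorted(ref_cdf, src_cdf)
    return lut.clip(0, levels - 1).astype(np.uint8)[src]

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (32, 32), dtype=np.uint8)
src = rng.integers(40, 60, (32, 32), dtype=np.uint8)
matched = match_histogram(src, ref)
```

Matching an image against itself is the identity, which is a handy sanity check.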

Image Pyramids Known as a Gaussian pyramid [Burt and Adelson, 1983]; in computer graphics, a mip map [Williams, 1983]; a precursor to the wavelet transform.
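The construction can be sketched in a few lines (illustrative only; a 2x2 box average stands in here for the Gaussian blur-then-subsample step, whose effective filter size doubles per level):

```python
import numpy as np

def gaussian_pyramid(img, levels=4):
    # Repeatedly smooth and subsample by 2 in each dimension.
    pyr = [img]
    for _ in range(levels - 1):
        h, w = pyr[-1].shape
        pyr.append(pyr[-1].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3)))
    return pyr

pyr = gaussian_pyramid(np.random.default_rng(0).random((64, 64)))
```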

Band-pass filtering: the Laplacian pyramid (subband images) is created from the Gaussian pyramid (low-pass images) by subtraction.

Laplacian Pyramid How can we reconstruct (collapse) this pyramid into the original image? (We need the low-pass image at the coarsest level in addition to the subbands.)
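One way to see the answer is a small sketch (hypothetical helper names; pixel replication stands in for Burt and Adelson’s filtered upsampling): keep the coarsest low-pass image, then repeatedly upsample and add back each band-pass level. Reconstruction is exact because the same upsampling is used in both directions.

```python
import numpy as np

def blur(img):
    # Separable binomial filter (approximates a Gaussian), circular boundary.
    w = np.array([0.25, 0.5, 0.25])
    for ax in (0, 1):
        img = sum(k * np.roll(img, s, axis=ax) for k, s in zip(w, (-1, 0, 1)))
    return img

def build_laplacian(img, levels=3):
    # Each subband = current level minus the upsampled next (coarser) level.
    pyr = []
    for _ in range(levels):
        down = blur(img)[::2, ::2]
        up = down.repeat(2, axis=0).repeat(2, axis=1)
        pyr.append(img - up)
        img = down
    pyr.append(img)                      # the coarsest low-pass image we "need"
    return pyr

def collapse_laplacian(pyr):
    # Reconstruction: upsample the running image and add back each band.
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = img.repeat(2, axis=0).repeat(2, axis=1) + band
    return img

img = np.random.default_rng(0).random((64, 64))
rec = collapse_laplacian(build_laplacian(img))
```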

Steerable Pyramid: the input image is decomposed using 7 filters.

Heeger & Bergen ’95 Start with a noise image as the output. Main loop: 1. Match the pixel histogram of the output image to the input. 2. Decompose the input and output images using a multi-scale filter bank (steerable pyramid). 3. Match the subband histograms of the input and output pyramids. 4. Reconstruct the input and output images (collapse the pyramids).

Simoncelli & Portilla ’98+ Marginal statistics are not enough: neighboring filter responses are highly correlated (an edge at low resolution will cause an edge at high resolution). Let’s match 2nd-order statistics too!

Simoncelli & Portilla ’98+ Match joint histograms of pairs of filter responses at adjacent spatial locations, orientations, and scales. Optimize using repeated projections onto statistical constraint surfaces.
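A toy numpy sketch (difference-of-blur bands stand in for the steerable pyramid; everything here is illustrative) shows the cross-scale dependence that marginal histograms miss: at an edge, the magnitudes of fine and coarse band-pass responses are strongly correlated.

```python
import numpy as np

def blur(img):
    # Separable binomial filter, circular boundary.
    w = np.array([0.25, 0.5, 0.25])
    for ax in (0, 1):
        img = sum(k * np.roll(img, s, axis=ax) for k, s in zip(w, (-1, 0, 1)))
    return img

# A step edge activates band-pass filters at every scale in the same place.
img = np.zeros((64, 64))
img[:, 32:] = 1.0
b1 = blur(img)
fine = img - b1                 # finest band
coarse = b1 - blur(b1)          # next (coarser) band
corr = np.corrcoef(np.abs(fine).ravel(), np.abs(coarse).ravel())[0, 1]
```

The magnitude correlation between adjacent bands is well above zero, which is exactly the kind of joint statistic Portilla and Simoncelli add as a constraint.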

Texture for object recognition A “jet”

Object Bag of ‘words’

Analogy to documents

Document 1: Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.
Keywords: sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

Document 2: China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.
Keywords: China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value

Pipeline overview (learning and recognition): feature detection & representation; codewords dictionary; image representation; category models (and/or) classifiers; category decision.

1. Feature detection and representation

Feature detection
Sliding window: Leung et al., 1999; Viola et al., 1999; Renninger et al., 2002
Regular grid: Vogel et al.; Fei-Fei et al., 2005
Interest point detector: Csurka et al.; Fei-Fei et al.; Sivic et al., 2005
Other methods: random sampling (Ullman et al., 2002); segmentation-based patches (Barnard et al., 2003; Russell et al., 2006; etc.)
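The regular-grid strategy is the simplest of these to sketch (illustrative code, not any particular paper's): cut the image into fixed-size patches at a fixed stride and flatten each into a descriptor vector.

```python
import numpy as np

def grid_patches(img, size=8, stride=8):
    # Extract size x size patches on a regular grid; each row of the
    # result is one flattened patch descriptor.
    h, w = img.shape
    return np.array([img[y:y + size, x:x + size].ravel()
                     for y in range(0, h - size + 1, stride)
                     for x in range(0, w - size + 1, stride)])

patches = grid_patches(np.random.default_rng(0).random((64, 64)))
```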

Feature Representation Visual words (aka textons, aka keypoints): k-means-clustered pieces of the image. Various representations: filter bank responses; image patches; SIFT descriptors. All encode more or less the same thing…

Interest Point Features Detect patches [Mikolajczyk and Schmid ’02], [Matas et al. ’02], [Sivic et al. ’03]; normalize each patch; compute a SIFT descriptor [Lowe ’99]. Slide credit: Josef Sivic

… Interest Point Features

… Patch Features

dictionary formation …

Clustering (usually k-means) Vector quantization … Slide credit: Josef Sivic
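A bare-bones version of this step (plain Lloyd's algorithm on descriptor vectors; real systems use better initialization such as k-means++, and all names here are illustrative):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's algorithm: learn k codeword centers from descriptors X.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):          # keep empty clusters unchanged
                centers[j] = X[labels == j].mean(0)
    return centers

def quantize(X, centers):
    # Vector quantization: assign each descriptor to its nearest codeword.
    return ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)

# Two well-separated descriptor clusters as a toy stand-in for real features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(10, 0.5, (50, 2))])
codebook = kmeans(X, k=2)
words = quantize(X, codebook)
```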

Clustered Image Patches Fei-Fei et al. 2005

Filterbank

Textons (Malik et al, IJCV 2001) K-means on vectors of filter responses

Textons (cont.)

Image patch examples of codewords Sivic et al. 2005

Visual synonyms and polysemy. Visual polysemy: a single visual word occurring on different (but locally similar) parts of different object categories. Visual synonyms: two different visual words representing a similar part of an object (the wheel of a motorbike).

Image representation: a histogram of codeword frequencies.
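Concretely (a sketch; `labels` would come from vector-quantizing the image's descriptors against a K-word dictionary):

```python
import numpy as np

def bow_histogram(labels, k):
    # Normalized frequency of each codeword: the bag-of-words vector.
    counts = np.bincount(labels, minlength=k)
    return counts / counts.sum()

h = bow_histogram(np.array([0, 0, 1, 2]), k=4)
```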

Vision Science & Computer Vision Groups, University of California, Berkeley. Scene Classification (Renninger & Malik). Categories: kitchen, living room, bedroom, bathroom; city, street, farm; beach, mountain, forest.

kNN Texton Matching
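A minimal kNN matcher over texton histograms might look like this (the chi-squared distance is a common choice for comparing histograms; the details are illustrative, not Renninger & Malik's exact setup):

```python
import numpy as np

def chi2(h1, h2, eps=1e-10):
    # Chi-squared distance between two normalized histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def knn_classify(query, train_hists, train_labels, k=1):
    # Label the query with the majority vote of its k nearest neighbors.
    d = np.array([chi2(query, h) for h in train_hists])
    nearest = np.asarray(train_labels)[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]

# Tiny toy "training set" of 2-bin histograms with class labels.
train_hists = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1]])
train_labels = np.array([0, 1, 0])
pred = knn_classify(np.array([0.95, 0.05]), train_hists, train_labels)
```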

Discrimination of Basic Categories: the texture model compared with chance, at presentation times of 37 ms, 50 ms, and 69 ms.

Object Recognition using texture

Learn texture model representation (training on labeled class data): textons (rotation-variant); clustering with K = 2000; then clever merging; then fitting histograms with Gaussians.