INDEPENDENT COMPONENT ANALYSIS OF TEXTURES, based on the article: R. Manduchi, J. Portilla, "Independent Component Analysis of Textures", Proc. of the 7th IEEE Int. Conf. on Computer Vision, 1999.

Presentation transcript:

INDEPENDENT COMPONENT ANALYSIS OF TEXTURES, based on the article: R. Manduchi, J. Portilla, "Independent Component Analysis of Textures", Proc. of the 7th IEEE Int. Conf. on Computer Vision, 1999. Presented by Ramūnas Girdziušas.

Outline
- Step-1: Markov Random Fields as texture models
- Step-2: Combining MRF with steerable pyramids
- Step-3: Optimizing the representation by ICA

Introduction
What is texture?
[Pickett, 1970]: "...large number of elements, each in some degree visible, and, on the whole densely and evenly (possibly randomly) arranged over the field of view such that there is a distinct spatial repetitiveness in the pattern."
[Cross and Jain, 1983]: "...stochastic, possibly periodic, two-dimensional image field."
Main tasks: restoration, segmentation, classification, synthesis.
Tools:
- Random fields
- Co-occurrence matrices
- Reaction-diffusion equations
- Mosaic models
- Fractal parameters
- Subband decompositions
- Higher-order statistics
We focus on the classification of image textures using MRF modeling of steerable pyramid image representations filtered by ICA.

Step-1: Markov Random Field modeling of texture
- A systematic approach based on sound principles.
- Models the image through local interactions of pixels.
Texture classification (MAP). Problem: given an image containing more than one texture, determine whether a particular pixel comes from the l-th texture, i.e. assign a label L(i) to every pixel i.
MAP classifier. According to Bayes' theorem, the posterior of the label field is proportional to the likelihood of the observed image times the prior on the labels. MAP: find the labeling L that maximizes this posterior.

The 1st assumption: the observations are conditionally independent given the labels.
The 2nd assumption: L is a locally dependent Markov Random Field (MRF) with pdf p(L), i.e. the label at a pixel depends only on the labels in its neighborhood.
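The two assumption formulas were images on the original slide; written out in LaTeX (with pixel index i and neighborhood N_i as my notation, not the slide's), they and the resulting MAP criterion read:

% Assumption 1: conditional independence of the observations given the labels
p(X \mid L) = \prod_i p\bigl(x_i \mid L(i)\bigr)
% Assumption 2: L is a locally dependent MRF
p\bigl(L(i) \mid L(j),\, j \neq i\bigr) = p\bigl(L(i) \mid L(N_i)\bigr)
% MAP criterion after factorization
\hat{L} = \arg\max_L \; p(L) \prod_i p\bigl(x_i \mid L(i)\bigr)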

The Iterated Conditional Modes (ICM) algorithm (J. Besag, 1983)
- A fast alternative to full MAP estimation.
- Local deterministic relaxation.
Algorithm:
1. Initialize the labeling L according to the ML decision.
2. For every epoch k, for every image pixel i, choose the label L(i) that maximizes the local conditional posterior p(x_i | L(i)) p(L(i) | labels of the neighbors of i).
3. Repeat step 2 until no label changes occur.
Pros and cons of ICM:
- avoids large-scale deficiencies;
- easily gets stuck in local minima.
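A minimal sketch of ICM in Python (NumPy only), assuming the per-pixel, per-class log-likelihoods have already been computed (e.g. from the channel histograms of Step-3) and using a simple Potts prior that counts agreeing 4-neighbors; the function name and the smoothing weight beta are my own choices, not the paper's.

import numpy as np

def icm_segment(log_lik, beta=1.0, max_epochs=10):
    """Iterated Conditional Modes for pixelwise texture labeling.

    log_lik : (H, W, K) array of per-pixel log-likelihoods log p(x_i | L(i)=k)
    beta    : Potts-prior weight rewarding agreement with the 4-neighbors
    Returns an (H, W) integer label image.
    """
    H, W, K = log_lik.shape
    # Step 1: initialize with the maximum-likelihood decision.
    labels = np.argmax(log_lik, axis=2)

    for _ in range(max_epochs):
        changed = False
        for y in range(H):
            for x in range(W):
                # Collect the labels of the 4-neighbors (skip out-of-image ones).
                neigh = []
                if y > 0:     neigh.append(labels[y - 1, x])
                if y < H - 1: neigh.append(labels[y + 1, x])
                if x > 0:     neigh.append(labels[y, x - 1])
                if x < W - 1: neigh.append(labels[y, x + 1])
                neigh = np.asarray(neigh)

                # Local posterior: data term + Potts prior (count of agreeing neighbors).
                scores = log_lik[y, x] + beta * np.array(
                    [np.sum(neigh == k) for k in range(K)])
                best = int(np.argmax(scores))
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed = True
        if not changed:  # no label changed during this epoch -> converged
            break
    return labels

For example, icm_segment(np.random.rand(64, 64, 3)) returns a 64x64 label map with values in {0, 1, 2}.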

Step-2: Combining MRF modeling with multiresolution approaches
Why?
- A pixel-level MRF is only suitable for micro-textures.
- Biological relevance?
- Invariance properties?
- High computational complexity.
- Robustness to noise?
What kind of feature space should we consider?
- Invariance to slowly varying bias.
- Energy separation while preserving locality.
- Steerability (shiftability).
Steerability [Teo, 1998]: a function f(x,y) is steerable under a Lie group G if any transformation of f by an element of G can be written as a linear combination of a fixed, finite set of basis functions (see the formula below).
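The missing right-hand side of the steerability definition can be written out as follows; g denotes a group element, alpha_j the interpolation functions and b_j the fixed basis, which is my notation rather than the slide's.

% Steerability under a Lie group G: for every g in G the transformed function
% is a finite linear combination of fixed basis functions
f\bigl(g^{-1}\cdot(x,y)\bigr) \;=\; \sum_{j=1}^{M} \alpha_j(g)\, b_j(x,y)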

Steerable pyramids
- Introduced to remove some deficiencies of wavelets.
- Code in Matlab and C is available on the web.
The steerable pyramid is a linear, non-orthogonal, overcomplete, self-inverting, multi-scale, multi-orientation image decomposition.
Why is it useful?
- The power contained within a subband is invariant under translation of the signal.
- At the same scale and position, the power in each orientation subband is rotation invariant.
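The actual decomposition is the Matlab/C steerable pyramid code mentioned above (a Python port, pyrtools, also exists). As a self-contained illustration of "several scales x several orientations" subband features, here is a rough NumPy/SciPy stand-in that uses derivative-of-Gaussian filters instead of the true pyramid filters; all function names are mine.

import numpy as np
from scipy.ndimage import gaussian_filter

def oriented_subbands(img, n_scales=3):
    """Crude multi-scale, two-orientation decomposition of a grayscale image.

    First-order derivative-of-Gaussian filters (horizontal and vertical) at
    dyadic scales serve as a stand-in for the two lowest-order orientation
    bands of a steerable pyramid.  Returns 2 * n_scales response images.
    """
    responses = []
    for s in range(n_scales):
        sigma = 2.0 ** s
        gx = gaussian_filter(img, sigma=sigma, order=(0, 1))  # vertical structure
        gy = gaussian_filter(img, sigma=sigma, order=(1, 0))  # horizontal structure
        responses.extend([gx, gy])
    return responses

def subband_powers(img, n_scales=3):
    """Translation-invariant texture features: mean power within each subband."""
    return np.array([np.mean(r ** 2) for r in oriented_subbands(img, n_scales)])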

Example: three scales and two orientations

Step-3: Selection of the "optimal" basis
Motivation:
- Texture is characterized by the joint pdf of its features.
- Typical filter-based algorithms do not estimate this joint description; marginal statistics are used instead.
- Does a set of marginals represent the joint pdf well?
Approach:
- Find the basis of a given filter space which generates the most informative marginals for a given texture, in the sense that the product of the marginal densities most closely approximates the joint pdf (see the formula below).
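In symbols (my notation, not the slide's): with y the vector of filter-bank responses at a pixel and W a demixing matrix, ICA seeks the W for which the product of marginals is closest to the joint density of the demixed coordinates s = W y:

% ICA criterion: minimize the KL divergence between the joint pdf
% of the demixed channels and the product of their marginals
W^{\ast} \;=\; \arg\min_{W} \; D_{\mathrm{KL}}\!\left( p(s) \,\middle\|\, \prod_{k} p_k(s_k) \right),
\qquad s = W\,y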

The algorithm
Training: for each texture l,
1. Filter the texture with a fixed filter bank.
2. Demix the filter outputs using ICA.
3. Compute the channel histograms.
Classification:
1. Apply the fixed filter bank to the test image.
2. For each texture model l, multiply the filter output vectors by the model's ICA matrix W and obtain the marginal likelihoods from the channel histograms.
3. Compute the conditional likelihoods p(x_i | L(i) = l).
4. Use ICM to obtain the pixel labels from these conditional likelihoods.
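A hedged sketch of this training/classification pipeline in Python, using scikit-learn's FastICA (assuming a recent scikit-learn) as the demixing step and simple histograms as the marginal density estimates; the function names, bin count and probability floor are my choices, not the paper's.

import numpy as np
from sklearn.decomposition import FastICA

def train_texture_model(responses, n_bins=32):
    """Train one texture model from filter-bank responses.

    responses : (n_pixels, n_channels) array of filter outputs for one texture
    Returns the fitted FastICA object and, per demixed channel, (bin_edges, bin_probs).
    """
    ica = FastICA(whiten="unit-variance", random_state=0)
    s = ica.fit_transform(responses)              # demixed channels, one per column
    hists = []
    for k in range(s.shape[1]):
        probs, edges = np.histogram(s[:, k], bins=n_bins, density=True)
        hists.append((edges, probs + 1e-9))       # small floor avoids log(0)
    return ica, hists

def log_likelihood_map(responses, ica, hists):
    """Per-pixel log-likelihood of test responses under one texture model,
    assuming independence of the demixed channels (product of marginals)."""
    s = ica.transform(responses)
    loglik = np.zeros(s.shape[0])
    for k, (edges, probs) in enumerate(hists):
        idx = np.clip(np.digitize(s[:, k], edges) - 1, 0, len(probs) - 1)
        loglik += np.log(probs[idx])
    return loglik                                 # shape (n_pixels,)

Stacking log_likelihood_map outputs for all K texture models into an (H, W, K) array gives exactly the input expected by the icm_segment sketch shown earlier.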

A few words about texture synthesis
Problem: generate an image that matches the appearance of a given texture sample.
Key ingredients (illustrated on the original slide): histogram matching and the texture synthesis algorithm built on it.
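Histogram matching is simple enough to spell out compactly; below is a rank-ordering sketch in NumPy that forces the gray-level distribution of a source image to follow that of a target sample (a standard formulation, not code from the paper).

import numpy as np

def match_histogram(source, target):
    """Return a copy of `source` whose gray-level histogram matches `target`.

    Classic rank-ordering trick: the k-th smallest source value is replaced by
    the corresponding quantile of the target distribution (interpolated when
    the two images have different numbers of pixels).
    """
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)

    # Ranks (0..1) of every source pixel within the source image.
    order = np.argsort(src.ravel())
    ranks = np.empty(order.size, dtype=float)
    ranks[order] = np.linspace(0.0, 1.0, order.size)

    # Quantiles of the target distribution at those ranks.
    tgt_sorted = np.sort(tgt.ravel())
    matched = np.interp(ranks, np.linspace(0.0, 1.0, tgt_sorted.size), tgt_sorted)
    return matched.reshape(src.shape)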

Conclusions
- Texture classification can be performed pixelwise using a MAP classifier:
  - conditional independence together with the Markov property makes the MAP computation tractable;
  - ICM is a fast, deterministic, approximate MAP procedure.
- It is better to consider the MRF at several scales, for example by decomposing the image with a steerable pyramid.
- Classification results can be improved by making the features as independent as possible.
- More textures can be synthesized by using shifted versions of the filters and then performing ICA.
- In general, applying ICA to texture analysis makes sense:
  - textures are non-Gaussian intensity processes;
  - wavelet representations are non-Gaussian too.
- In particular, ...

Is the most informative likelihood the desired criterion of optimality? Example [Randen, 1997]: PCA: -> 0.01%; MOT: -> 67%.
More to read:
- Similar ideas without ICA: D. Heeger, J. Bergen, "Pyramid-Based Texture Analysis/Synthesis", Proc. SIGGRAPH, August 1995.
- Representation vs. separation: T. Randen, "Filter and Filter Bank Design for Image Texture Recognition", PhD thesis, 1997.
- Naive Bayes can be optimal even when independence is violated: P. Domingos, M. Pazzani, "Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier", Proc. ICML, 1996.
- Everything about the steerable pyramids: