How might natural scenes shape neural machinery for computing shape from texture? Qiaochu Li (Blaine). Advisor: Tai Sing Lee.


The Shape from Texture Problem: frequency gradient, orientation gradient.

Slants & Tilts of Planes. Corentin Massot et al., "Model of Frequency Analysis in the Visual Cortex and the Shape from Texture Problem," IJCV.

Scientific Question. Mathematically, slant and tilt can be computed from estimates of the frequency and orientation gradients. But how does the brain do it? Conjecture: the brain learns associations between images and 3D structures, so that upon seeing an image it can infer the underlying 3D structure. Objective: study images conditioned on each 3D shape to see whether there are characteristic image features associated with each slant and tilt, that is, features that vary with 3D shape.

Natural Scenes as a Medium (diagram): a mathematical model, natural scenes, and 3D perception, linked by algebra and frequency analysis, statistical learning, physical models of image formation, and association / probabilistic inference.

Approach. Q: Is it possible for the brain to discover image features such as "spatial frequency gradient" and "orientation gradient" from natural scenes? A: We fit slant-and-tilt planes to 3D range data, and then analyze the images conditioned on the slant and tilt of each plane.

Methodology

What do we HAVE? Images with depth information (CMU depth dataset): optical image and range image.

What do we WANT? Discover image features from natural scenes associated with different 3D shapes (i.e., slant and tilt). Can we see evidence of a "spatial frequency gradient" and an "orientation gradient"?

Approach. Stage 1, processing of 3D range data: partition each image into different regions; fit slant-and-tilt planes to patches within each region. Stage 2, analysis of the 2D optical image: retinal processing, frequency analysis, principal component analysis.

Approach (diagram): the brain associates 3D shapes (Shape 1, Shape 2; from Stage 1) with image features (Feature 1, Feature 2; from Stage 2) obtained by statistical and frequency analysis.

Stage 1

Partition: Normalized-Cut algorithm. Segmentation into 5 parts vs. over-segmentation into 30 parts; note segmentation omission. Jianbo Shi et al., "Normalized Cuts and Image Segmentation," PAMI.
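No code accompanies the slide; a rough sketch of this partition step with off-the-shelf tools, assuming a recent scikit-image in which the RAG utilities live in skimage.graph (older releases expose them under skimage.future.graph) and a placeholder image path and segment counts, might look like this:

import numpy as np
from skimage import io, segmentation, graph

# Hypothetical input: one optical image from the CMU depth dataset.
img = io.imread("optical_image.png")

# Over-segment into superpixels first, then merge them with a
# normalized-cut criterion on the region adjacency graph (RAG).
superpixels = segmentation.slic(img, n_segments=200, compactness=10, start_label=1)
rag = graph.rag_mean_color(img, superpixels, mode='similarity')
regions = graph.cut_normalized(superpixels, rag)  # label image: one id per region

print("number of regions:", len(np.unique(regions)))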

Compute 3D Shape: fit the range data with a plane by regression.

Computing Precisely: threshold on the sum of squared residuals (SSR) to keep only well-fit patches; small SSR vs. large SSR.
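A minimal sketch of these two steps (plane fit by least squares, then an SSR threshold), assuming each region's range data comes as (x, y, z) samples and using placeholder data and threshold value:

import numpy as np

def fit_plane(x, y, z):
    # Least-squares fit of z = a*x + b*y + c to the range samples of one region.
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    residuals = z - A @ np.array([a, b, c])
    ssr = float(np.sum(residuals ** 2))
    return a, b, c, ssr

def slant_tilt(a, b):
    # Slant and tilt (degrees) of the fitted plane, from its normal (-a, -b, 1).
    slant = np.degrees(np.arctan(np.hypot(a, b)))   # angle away from frontoparallel
    tilt = np.degrees(np.arctan2(b, a)) % 360.0     # direction of steepest depth change
    return slant, tilt

# Hypothetical usage: keep only patches whose plane fit is good enough.
SSR_THRESHOLD = 1e3                 # placeholder value
x, y, z = np.random.rand(3, 500)    # stand-in for one region's range samples
a, b, c, ssr = fit_plane(x, y, z)
if ssr < SSR_THRESHOLD:
    print("slant/tilt:", slant_tilt(a, b))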

Our Database: a natural-scene patch set pairing optical patches with their 3D shape.

Stage 2

Focus on TEXTURE. Retinal processing: KILL LUMINANCE & SAVE TEXTURE.
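The slides do not specify the retinal model; one assumed, minimal way to discard slowly varying luminance while keeping texture contrast is a difference-of-Gaussians (center-surround) filter, sketched below with placeholder filter scales:

import numpy as np
from scipy.ndimage import gaussian_filter

def retina_filter(patch, center_sigma=1.0, surround_sigma=4.0):
    # Center-surround (difference-of-Gaussians) filtering: removes slowly
    # varying luminance and keeps local texture contrast.
    patch = patch.astype(float)
    center = gaussian_filter(patch, center_sigma)
    surround = gaussian_filter(patch, surround_sigma)
    return center - surround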

Frequency Analysis. Use the Fast Fourier Transform to focus on frequency information; neurons in V1 can perform a windowed Fourier transform. Expectation: some frequency gradient across space within a patch; the farther away the surface, the higher the frequency.
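The deck gives no code; as a rough illustration of the kind of frequency analysis this implies (an assumed sketch, not the project's actual pipeline), one can compare the radially averaged power of a windowed FFT between the two halves of a patch:

import numpy as np

def radial_power_spectrum(patch, n_bins=16):
    # Radially averaged power spectrum of a 2D patch (windowed FFT).
    win = np.outer(np.hanning(patch.shape[0]), np.hanning(patch.shape[1]))
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch * win))) ** 2
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    yy, xx = np.indices(patch.shape)
    r = np.hypot(yy - cy, xx - cx)
    edges = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    spectrum = np.zeros(n_bins)
    for i in range(n_bins):
        mask = (r >= edges[i]) & (r < edges[i + 1])
        spectrum[i] = power[mask].mean() if mask.any() else 0.0
    return spectrum

def mean_radial_frequency(patch):
    # Power-weighted mean radial frequency, ignoring the DC bin.
    s = radial_power_spectrum(patch)[1:]
    freqs = np.arange(1, len(s) + 1)
    return float((freqs * s).sum() / s.sum())

# Expectation: for a slanted textured plane, the half of the patch that is
# farther away should have a higher mean radial frequency.
patch = np.random.rand(64, 64)      # stand-in for one optical patch
h = patch.shape[0] // 2
print(mean_radial_frequency(patch[:h]), mean_radial_frequency(patch[h:]))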

Principal Component Analysis (PCA): directions of significant variation in the data distribution. Neurons are known to be able to discover the principal components of their input. Expectation in the PCs: a frequency gradient (chirp) and an orientation gradient.
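A minimal sketch of this step, assuming the optical patches for one (slant, tilt) condition have been flattened into rows of a matrix; the patch size, patch count, and component count are placeholders:

import numpy as np
from sklearn.decomposition import PCA

# Hypothetical input: N patches of size 32x32 for one (slant, tilt) condition.
patches = np.random.rand(1000, 32 * 32)    # stand-in for real optical patches

pca = PCA(n_components=16)
pca.fit(patches - patches.mean(axis=0))    # PCA on mean-subtracted patches

# Each principal component can be reshaped back to a 32x32 image and inspected
# for chirp-like (frequency gradient) or orientation-gradient structure.
components = pca.components_.reshape(-1, 32, 32)
print(components.shape, pca.explained_variance_ratio_[:5])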

Results

Average power spectrum (radial frequency) for a set of patches, computed separately for the top part and the bottom part.

Principal Components: slant 0, tilt 0; slant 75, tilt 0.

Principal Components: slant 45, tilt 45.

Principal Components: slant 75, tilt 90; a frequency gradient is visible.

Conclusions. Preliminary evidence shows that a spatial frequency gradient can be discovered from natural scenes. The effect is small, due to the small patch size. An orientation gradient is not evident, but might emerge using the polar angle … Since the spatial frequency gradient is correlated with slant and tilt, it is possible that neurons can learn such an association.

Future Direction. How can the brain learn this association? Simulation of associative learning via Hebbian learning. Diagram: V1 and V2 image neurons and shape neurons, driven by natural scenes and connected by Hebbian learning.
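The slide names only the mechanism; a toy sketch of the intended Hebbian association between hypothetical "image" and "shape" unit activations (unit counts, iteration count, and learning rate are all placeholders) might look like this:

import numpy as np

rng = np.random.default_rng(0)
n_image, n_shape, lr = 32, 8, 0.01

W = np.zeros((n_shape, n_image))           # association weights: shape <- image

for _ in range(10000):
    # Hypothetical paired activations driven by the same natural-scene patch:
    image_act = rng.random(n_image)        # "image" responses (e.g. filter outputs)
    shape_act = rng.random(n_shape)        # "shape" responses (e.g. slant/tilt tuning)
    # Hebbian update: strengthen weights between co-active units.
    W += lr * np.outer(shape_act, image_act)
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12   # keep weights bounded

# After learning, an image activation alone yields a predicted shape activation.
predicted_shape = W @ rng.random(n_image)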

Future Direction. Independent component analysis: extract distributions of independent features specific to each slant and tilt. These distributions should be better suited to discovering spatial frequency and orientation gradients, and independent components are good models of V1 neurons. Prediction: a different spatial distribution of features (independent components) for different slants and tilts.
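A sketch of this direction with FastICA, assuming a recent scikit-learn and placeholder patch data for one (slant, tilt) condition:

import numpy as np
from sklearn.decomposition import FastICA

# Hypothetical input: flattened optical patches for one (slant, tilt) condition.
patches = np.random.rand(2000, 32 * 32)

ica = FastICA(n_components=64, whiten='unit-variance', max_iter=500, random_state=0)
sources = ica.fit_transform(patches)        # per-patch coefficients of each component

# The columns of the mixing matrix can be reshaped into 32x32 "features";
# the prediction is that their spatial distribution differs across slant/tilt bins.
features = ica.mixing_.T.reshape(-1, 32, 32)
print(features.shape)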

Q&A

THANKS