The Quotient Image: Class-based Recognition and Synthesis Under Varying Illumination. T. Riklin-Raviv and A. Shashua, Institute of Computer Science, The Hebrew University.


The Quotient Image: Class-based Recognition and Synthesis Under Varying Illumination. T. Riklin-Raviv and A. Shashua, Institute of Computer Science, Hebrew University.

Example: 3 basis images. The movie is created by linearly combining the 3 basis images.

Image Synthesis: Given a single image y and a database of other images of the same “class” (the bootstrap set), we would like to generate new images from y simulating changes of illumination.

Recognition: Given a database of images of the same “class” under varying illumination conditions and a novel image, match the novel image to images of the same object.

Definition: Ideal Class of Objects. The images produced by an ideal class of objects are I_i(x, y) = ρ_i(x, y) · n(x, y)^T s, where ρ_i(x, y) is the albedo (surface texture) of object i of the class, n(x, y) is the normal direction (shape) of the surface, i.e., all objects have the same shape, and s is the point light source direction.

Related Work. Basic result: Shashua 91, 97. Applications and related systems: Hallinan 94, Kriegman et al. 96, 98. Work on “class-based” synthesis and recognition of images, mostly with varying viewing positions: Basri 96, Beymer & Poggio 95, Freeman & Tenenbaum 97, Vetter & Blanz 98, Vetter, Jones & Poggio 97, 98, Edelman 95, Atick, Griffin & Redlich 97. Rendering under more general assumptions: Dorsey et al. 93, 94. Linear class: Vetter & Poggio 92, 97. Additive error term: Sali & Ullman 98. Reflectance ratio: Nayar & Bolle 96.

Basic Assumptions. Lambertian surface, linear model: no cast shadows, no highlights. Same viewpoint (canonical view). Ideal class assumption. Images have the same size and are (roughly) aligned.

The Quotient Image: Definition. Given images of objects y and a, respectively, under illumination s, the quotient image of y relative to a is Q_y(u, v) = ρ_y(u, v) / ρ_a(u, v) (pixel-by-pixel division). Thus Q_y depends only on relative surface information (the ratio of albedos) and is independent of illumination.
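As an illustration: when the two images are taken under the same illumination s, the Lambertian shading term cancels in the ratio, so the quotient can be formed directly from the images. A minimal numpy sketch; the helper name quotient_image and the epsilon guard are assumptions, not part of the slides:

```python
import numpy as np

def quotient_image(img_y, img_a, eps=1e-6):
    """Pixel-by-pixel ratio of two aligned, same-size images of objects y
    and a taken under the SAME illumination s. Under the Lambertian model
    I = rho * (n . s), the shading term cancels, leaving rho_y / rho_a,
    which no longer depends on the illumination."""
    return img_y / (img_a + eps)
```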

The Quotient Image Method: Proposition. Let A_1, A_2, A_3 be three images of object a, illuminated by three linearly independent light sources s_1, s_2, s_3. Let y_s be an image of object y illuminated by light s. Then there exist coefficients x_1, x_2, x_3 that satisfy s = x_1 s_1 + x_2 s_2 + x_3 s_3, and y_s = Q_y ⊗ (x_1 A_1 + x_2 A_2 + x_3 A_3), where ⊗ denotes pixel-by-pixel multiplication. Moreover, the image space of y is spanned by varying the coefficients.
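A small numerical check of this identity, using synthetic albedos, a shared normal field, and random light sources (illustrative code; attached shadows are ignored, consistent with the linear Lambertian assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 1000                                                   # number of pixels
normals = rng.normal(size=(P, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)  # shared shape n
rho_a = rng.uniform(0.2, 1.0, size=P)                      # albedo of object a
rho_y = rng.uniform(0.2, 1.0, size=P)                      # albedo of object y

S = rng.normal(size=(3, 3))           # rows are light sources s1, s2, s3
x = rng.normal(size=3)                # arbitrary coefficients x1, x2, x3
s = S.T @ x                           # novel light s = x1*s1 + x2*s2 + x3*s3

A = rho_a[:, None] * (normals @ S.T)  # columns are the images A1, A2, A3
y_s = rho_y * (normals @ s)           # image of object y under light s
Q = rho_y / rho_a                     # quotient image

# Proposition: y_s equals Q multiplied pixel-wise by the combination A x.
print(np.allclose(y_s, Q * (A @ x)))  # True
```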

The Quotient Image Method: Proof. Since s_1, s_2, s_3 are linearly independent, there exist coefficients x_j with s = Σ_j x_j s_j. Object a illuminated by s gives Σ_j x_j A_j = ρ_a n^T (Σ_j x_j s_j) = ρ_a n^T s. Therefore Q_y ⊗ (Σ_j x_j A_j) = (ρ_y / ρ_a) · ρ_a n^T s = ρ_y n^T s = y_s.

The Quotient Image Method: N = 1. [Figure: the three basis images a1, a2, a3 of a single bootstrap object, taken under light sources s1, s2, s3, are linearly combined and multiplied pixel-wise by the Q-image to reproduce the image under a novel light s.]

The Quotient Image Method: N = 1. [Figure: a linear combination of the basis images a1, a2, a3, multiplied pixel-wise by the Q-image, yields the synthesized image.]

The Quotient Image Method: Conclusions. Given A_1, A_2, A_3 and the coefficients x_1, x_2, x_3 that satisfy s = Σ_j x_j s_j, the quotient image Q_y readily follows, and one can generate y_s and all other images of the image space of y. In order to obtain the correct coefficients, a bootstrap set of more than one object is needed.

The Quotient Image Method: N > 1. [Figure: the original image, taken under light source s.]

The Quotient Image Method: Theorem 1. The energy function f(x) = Σ_i |A_i x − α_i y_s|² has a (global) minimum if the albedo of object y is rationally spanned by the bootstrap set, i.e., if there exist coefficients α_1, ..., α_N such that ρ_y = Σ_i α_i ρ_i.

The Quotient Image Method: Solving for x and α. Minimize f(x, α) = Σ_i |A_i x − α_i y_s|² over the light coefficients x and the scalars α_1, ..., α_N. Setting ∂f/∂α_i = 0, we also have α_i = (y_s^T A_i x) / (y_s^T y_s).

The Quotient Image Method: Solving for x and α. Substituting α_i back and setting ∂f/∂x = 0, written explicitly for x: Σ_i (A_i^T A_i − A_i^T y_s y_s^T A_i / (y_s^T y_s)) x = 0, a homogeneous system whose nontrivial solution (up to scale) gives the light coefficients x.

The Algorithm. Given: a bootstrap set A_1, ..., A_N and a novel image y_s. Scale y_s such that y_s^T y_s = 1. Use the minimization function min_x f(x), which reduces to a homogeneous system of linear equations in x, to compute the light coefficients x. Compute the quotient image Q_y = y_s / (A x) (pixel-by-pixel division), where A is the average of A_1, ..., A_N. For all choices of z, generate the synthesized images y_z = Q_y ⊗ (A z).
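A minimal end-to-end numpy sketch of this pipeline. The function name quotient_relight and the data layout are assumptions, and the light coefficients are estimated by ordinary least squares against the average bootstrap matrix, a shortcut for the energy minimization over the whole bootstrap set described above:

```python
import numpy as np

def quotient_relight(bootstrap, y_s, new_light_coeffs, eps=1e-6):
    """Quotient-image synthesis sketch.

    bootstrap        : array (N, P, 3) -- N bootstrap objects, P pixels,
                       3 images per object (one column per light source).
    y_s              : array (P,) -- novel image under an unknown light s.
    new_light_coeffs : iterable of length-3 vectors z, one synthesized
                       image is produced per z.
    """
    A = bootstrap.mean(axis=0)                   # average bootstrap matrix (P, 3)

    # Simplified coefficient estimate (least squares against A); the paper
    # instead minimizes an energy over all N bootstrap objects.
    x, *_ = np.linalg.lstsq(A, y_s, rcond=None)

    Q = y_s / (A @ x + eps)                      # quotient image (pixel-wise)

    # Re-render: every choice of z gives a new image Q * (A z).
    return [Q * (A @ np.asarray(z)) for z in new_light_coeffs]
```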

Samples of a few faces out of the 9×200 face images taken from T. Vetter's database, which was mainly used as a bootstrap set and as a source for novel images in the demonstrations that follow. Frontal faces: a collection of objects that all have the same shape but differ in their surface texture (albedo).

Synthesis from a single picture (quotient method) vs. synthesis from 3 pictures (linear combination). [Figure: results for 10 faces from the bootstrap set under 3 different light conditions.]

[Figure: 10 other faces from the database, each under 3 light conditions; synthesis from 3 pictures vs. synthesis from a single image and the bootstrap set.]

[Figure: synthesis from 3 pictures vs. synthesis from a single image and the bootstrap set; panels show the bootstrap set, the original image, and the quotient image.]

Original images compared to Q-image synthesized images. [Figure: 1st row: original images; 2nd row: Q-image synthesized images; 3rd row: exact values of the light direction, (0, +20), (+50, 0), (−50, 0), (0, −35), (0, 0); the directions are center, down, up, right, left.]

[Figure: light coefficient comparison, ground truth vs. Q-image coefficients, shown separately for the 1st, 2nd, and 3rd coefficients.]

Using a different database. [Figure: animation using a 3×3-image database; panels show the original image and the quotient image.]

[Figure: the original images; Q-images from a 1-object bootstrap set; Q-images from a 10-object bootstrap set.]

Handling Color Images: the original color image is transformed from RGB to HSV. [Figure: the original color image and its R, G, B and H, S, V channels.]
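A minimal sketch of one natural way to use this transform: re-render only the intensity (V) channel with the grayscale quotient-image synthesis and keep hue and saturation fixed. The helper name relight_color, the clipping of V to [0, 1], and treating H and S as unchanged are assumptions; rgb_to_hsv/hsv_to_rgb come from matplotlib:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def relight_color(rgb_image, relight_value_channel):
    """rgb_image: float array (H, W, 3) with values in [0, 1].
    relight_value_channel: function mapping the V channel (H, W) to a
    re-rendered V channel, e.g. quotient-image synthesis on intensities."""
    hsv = rgb_to_hsv(rgb_image)
    # Only intensity is re-rendered; hue and saturation are preserved so
    # the face keeps its original colors.
    hsv[..., 2] = np.clip(relight_value_channel(hsv[..., 2]), 0.0, 1.0)
    return hsv_to_rgb(hsv)
```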

[Figure: original image, quotient image, and synthesized sequence.]

Monica and Bill Under a New Light. [Figure: original images, quotient images, and synthesized sequences.]

[Figure: original image, quotient image, and synthesized sequence.]

Recognition under varying illumination: database generation. Each object in the database is represented by its quotient image only. The quotients can be made from images with varying illuminations. The quotient images were generated from N×3 (N = 20) base images. [Figure: quotient images of the database objects A, B, C, ..., N.]
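A minimal enrollment sketch matching this description; the name enroll, the data layout, and the same least-squares shortcut for the light coefficients as above are assumptions:

```python
import numpy as np

def enroll(images_by_id, bootstrap, eps=1e-6):
    """Build the recognition database: one quotient image per identity.

    images_by_id : dict mapping identity -> image vector (P,), each taken
                   under some arbitrary illumination.
    bootstrap    : array (N, P, 3) of the N*3 bootstrap images.
    """
    A = bootstrap.mean(axis=0)                       # average bootstrap matrix (P, 3)
    database = {}
    for identity, y in images_by_id.items():
        x, *_ = np.linalg.lstsq(A, y, rcond=None)    # simplified light coefficients
        q = y / (A @ x + eps)                        # quotient image of this object
        database[identity] = q / np.linalg.norm(q)   # unit norm, for matching later
    return database
```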

Identification: Given a new image of an object that appears in the database, taken under any light condition, its quotient is computed from A, B, C, ... (as was done in the database generation). It is then compared to the quotients in the database. Other methods used for comparison: 1. Correlation. Database: each object is represented in the database by its image under a certain lighting condition. Identification: correlation between the test image and the images stored in the database. 2. PCA. Database: PCA is applied to the objects' images plus 3×20 additional images of 20 objects under 3 illuminations (to match the conditions of the quotient method). Given the eigenvectors, each object is represented by its eigenvector coefficients. Identification: comparison (LSE) between the test image coefficients (computed the same way as for the database) and the database.
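A matching identification sketch, under the same assumptions as the enrollment sketch: compute the query's quotient exactly as during enrollment and pick the stored identity with the highest normalized correlation:

```python
import numpy as np

def identify(query_image, bootstrap, database, eps=1e-6):
    """Return the identity whose stored quotient image best matches the
    quotient image of the query (nearest neighbor by correlation)."""
    A = bootstrap.mean(axis=0)
    x, *_ = np.linalg.lstsq(A, query_image, rcond=None)
    q = query_image / (A @ x + eps)
    q /= np.linalg.norm(q)
    # Since all stored quotients are unit-norm, correlation is a dot product.
    return max(database, key=lambda identity: float(q @ database[identity]))
```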

Recognition Results: the quotient method compared to correlation.

Recognition Results (cont.): the quotient method vs. PCA.

The End