The Dimensionality of Scene Appearance
Rahul Garg, Hao Du, Steven M. Seitz and Noah Snavely

Low rank approximation of image collections (e.g., via PCA) is a popular tool in Computer Vision. Yet, surprisingly little is known to justify the observation that images of a scene tend to be low dimensional, beyond the special case of Lambertian scenes. We consider two questions in this work: When can we capture the variability in appearance using a linear model? And what is the linear dimensionality in different cases?

Factorization Framework

The matrix M contains the images as rows. Consider an arbitrary rank-k factorization of M into A (m x k) and B (k x n). Four interpretations of the factorization:
a) Rows of B as basis images.
b) Columns of A as basis profiles.
c) The Lambertian case: when k = 3, the rows of A may be thought of as light vectors while the columns of B correspond to pixel normals (ignoring shadows).
d) The reflectance map interpretation: with a distant viewer, distant lighting, and a single BRDF, pixel intensity is a function of the surface normal alone. The reflectance map R(n) can be encoded as an image of a sphere R of the same material, taken under similar conditions. An image is then I_i = R_i^T D, where D is a normal-indicator matrix, i.e., D(j, k) = 1 iff the normal at the k-th pixel of the scene is the same as the normal at the j-th pixel of the sphere image.

Theoretical contributions: dimensionality results for multiple materials; extensions to images taken from different viewpoints, to low dimensional families of BRDFs, and to filtered images; results for images captured through physical sensors.

Experiments and applications: experimental results on the CUReT BRDF database; modelling the appearance of diverse Internet Photo Collections using low rank linear models and demonstrating applications of the same.
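The factorization above can be sketched with a truncated SVD, which yields the best rank-k factorization M ≈ AB. This is a minimal illustration on synthetic data, not the authors' code: the Lambertian-style construction (random "light vectors" times random "normals") and all shapes are assumptions chosen only to produce a matrix of rank at most 3.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 20, 500, 3                      # m images, n pixels, target rank k
L = rng.standard_normal((m, k))           # stand-in for per-image light vectors (rows of A)
N = rng.standard_normal((k, n))           # stand-in for per-pixel normals (columns of B)
M = L @ N                                 # Lambertian-style image matrix, rank <= 3

# Truncated SVD gives the factorization M ~ A B with the four interpretations:
# rows of B are basis images, columns of A are per-pixel basis profiles.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
A = U[:, :k] * s[:k]                      # m x k
B = Vt[:k]                                # k x n

print(np.allclose(M, A @ B))              # exact here, since rank(M) <= k
```

Because M was constructed with rank at most 3, the rank-3 truncation reconstructs it exactly up to floating-point error; for real image matrices the same code gives the best rank-k approximation instead.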
Previous Results

- Distant viewer, distant lighting, Lambertian, no shadows: 3 basis images [Shashua 92]
- Distant viewer, distant lighting, Lambertian, attached shadows: 5 basis images [Ramamoorthi 02]
- Distant viewer, distant lighting, single arbitrary BRDF, attached shadows: # of normals [Belhumeur and Kriegman 98]

[Figure: the Lambertian factorization M (m x n) = A (m x 3) B (3 x n), where the rows of A are light directions and the columns of B are pixel normals.]

Theoretical Results

Assumptions: distant viewer and lighting, no cast shadows.

Basic result: from the structure of D, rank(M) <= rank(D) <= # of normals.

Extensions (these follow from explicit factorizations):

Conditions -> Dimensionality
- K_ρ different BRDFs, K_n normals -> K_ρ K_n
- K_ρ dimensional family of BRDFs, K_n normals -> K_ρ K_n
- Images taken from different viewpoints, geometrically aligned -> K_ρ K_n
- Filtered images, filtered by a K_f dimensional family of filters -> K_f K_ρ K_n
- K_ρ different BRDFs, the i-th BRDF of dimensionality K(i) (e.g., Lambertian is rank 3) -> Σ K(i)

Real world images captured by physical sensors, using the linear response model

  I_i(x) = ∫ s_i(λ) I_i(x, λ) dλ,

where I_i(x) is the intensity at pixel x, s_i(λ) is the sensor spectral response, and I_i(x, λ) is the light of wavelength λ incident on the sensor at pixel x. The BRDF as a function of λ is written α(λ) ρ(ω, ω', λ), where α(λ) is a wavelength dependent scalar (albedo).

Conditions -> Dimensionality
- K_α different albedos, K_ρ different BRDFs, K_n normals -> K_α K_ρ K_n
- ρ(ω, ω', λ) independent of λ, light sources with the same spectra, similar physical sensors -> K_ρ K_n

Results

[Figure: the first five basis images (Basis 1 through Basis 5) and reconstruction results using 1, 3, 5, 10 and 20 bases.]

Results for Internet Photo Collections

Project the registered images onto a 3D model. There is missing data in each observation, as an image captures only a part of the scene; EM is used to learn the basis appearances.

[Figure: input images, an original image, and the first 5 basis images for the Orvieto Cathedral.]
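The missing-data step mentioned above can be sketched as an EM-style alternation: impute unobserved pixels from the current low-rank model, then refit the model by truncated SVD. This is a minimal sketch on synthetic data; the observation rate, matrix sizes, and iteration count are illustrative assumptions, and the poster's actual procedure may differ in detail.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, k = 30, 200, 3
M_true = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))
mask = rng.random((m, n)) < 0.8           # True where the pixel was observed

X = np.where(mask, M_true, 0.0)           # initialize missing entries with 0
for _ in range(200):
    # M-step: refit the rank-k basis by truncated SVD of the completed matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_hat = (U[:, :k] * s[:k]) @ Vt[:k]
    # E-step: keep the observed pixels, impute the rest from the model
    X = np.where(mask, M_true, X_hat)

err = np.linalg.norm(X - M_true) / np.linalg.norm(M_true)
print(err < 0.1)                          # the hidden 20% is recovered closely
```

With enough observed entries relative to the rank, this alternation converges to a completion of the matrix, which is what allows a basis to be learned even though each image sees only part of the scene.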
University of Washington / Cornell University

Experiments on the CUReT BRDF Database

61 different materials. 50 x 50 images of a sphere of each material were rendered under 100 different distant directional illuminations (6,100 images in total). A universal basis was learned from all 6,100 images to gauge the range of materials present in the database. Surprisingly, the reconstruction error when using the universal basis was not very different from learning a separate basis for each material: the average RMS reconstruction accuracy was 85% using only six universal bases.

[Figure: reconstruction accuracy for materials in the CUReT database using 1, 2, 3, 6, 10 and 20 bases; the profile of a single pixel.]

The first six basis spheres computed from the CUReT database are remarkably similar to the analytical basis images used by [Ramamoorthi 02] to span the appearance space of a Lambertian sphere.

[Figure: the first 6 basis spheres computed from the CUReT database, next to the 5 analytical Lambertian basis images of [Ramamoorthi 02].]

Applications

View Expansion: images cover only a part of the scene; reconstruct the rest of the scene under similar illumination conditions.

Occluder Removal: remove the foreground objects and hallucinate the scene behind them.

[Figure: the general factorization M (m x n) = A (m x k) B (k x n).]
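The occluder-removal application above can be sketched as fitting an image's basis coefficients from its visible pixels only, then predicting the hidden pixels from the basis. This is a minimal sketch with synthetic data: the basis B, the occlusion mask, and the assumption that the image lies exactly in the learned subspace are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 5
B = rng.standard_normal((k, n))           # stand-in for k learned basis images (rows)
a_true = rng.standard_normal(k)
image = a_true @ B                        # a scene image lying in the basis span

visible = np.ones(n, dtype=bool)
visible[:100] = False                     # first 100 pixels hidden by an occluder

# Least squares on the visible pixels only: image[visible] ~ B[:, visible].T @ a
a, *_ = np.linalg.lstsq(B[:, visible].T, image[visible], rcond=None)
restored = a @ B                          # full image, occluded region hallucinated

print(np.allclose(restored, image))       # exact here, since the image is in-span
```

View expansion works the same way: the "invisible" pixels are simply the part of the scene the photo never covered, and the fitted coefficients extrapolate them from the basis.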