Measuring Bidirectional Texture Reflectance with a Kaleidoscope


Measuring Bidirectional Texture Reflectance with a Kaleidoscope. Jefferson Y. Han and Ken Perlin, Media Research Laboratory, New York University. Slides by Ingrid Montealegre.

Introduction. Recent work in realistic image synthesis focuses on the use of actual measured data from surfaces and materials. The goal is to capture the reflectance properties of a surface as a function of the incident and exitant angles of light. High dimensionality makes dense sampling difficult. Han and Perlin present a new technique for measuring the appearance of a textured surface as it is viewed or illuminated from different directions.

Texture Reflectance. The reflectance of a surface is characterized by its Bidirectional Reflectance Distribution Function (BRDF) [Nicodemus et al. 1977]. The BRDF is a 4-dimensional function that relates reflected light to its incident and exitant angles: BRDF(θi, φi, θe, φe). Real objects are not uniform, and are not accurately represented by a single BRDF.
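To make the 4D parameterization concrete, a toy BRDF can be written directly as a function of the four angles. The Lambertian-plus-specular-lobe model below is an assumed illustration (not the paper's measured data), and the coefficients kd, ks and shininess are made up:

```python
import math

def brdf(theta_i, phi_i, theta_e, phi_e, kd=0.8, ks=0.2, shininess=10.0):
    """Toy 4D BRDF(theta_i, phi_i, theta_e, phi_e): a Lambertian term
    plus a simple specular lobe.  Angles are in radians, with theta
    measured from the surface normal.  Coefficients are illustrative."""
    diffuse = kd / math.pi              # constant over all directions
    # Cosine between the exitant direction and the mirror direction of
    # the incident ray (same theta, with phi rotated by pi).
    cos_spec = (math.sin(theta_i) * math.sin(theta_e)
                * math.cos(phi_e - phi_i - math.pi)
                + math.cos(theta_i) * math.cos(theta_e))
    return diffuse + ks * max(cos_spec, 0.0) ** shininess
```

The specular term peaks when the exitant direction coincides with the mirror reflection of the incident direction, as one would expect.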

Texture Reflectance. Dana et al. [1999] introduce the Bidirectional Texture Function (BTF), which extends the BRDF with spatially varying reflectance. The BTF accurately captures surface subtleties, including self-occlusion and self-shadowing. It is a large 6D function, BTF(u, v, θi, φi, θe, φe), for which a dense sample is difficult to obtain.
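A back-of-the-envelope size calculation shows why densely tabulating the 6D function is hard even at coarse resolution. The 22 angular samples match the prototype's facet count; the 64×64 spatial resolution is an assumption for illustration:

```python
# Rough size of a densely tabulated BTF(u, v, theta_i, phi_i, theta_e, phi_e).
texels = 64 * 64      # spatial resolution (assumed for illustration)
directions = 22       # angular samples per hemisphere, as in the prototype
channels = 3          # RGB
size_bytes = texels * directions * directions * channels  # 1 byte per sample
print(size_bytes / 1e6, "MB")   # ~5.9 MB even at this coarse sampling
```

Doubling the angular resolution in each dimension quadruples the table, which is why dense acquisition hardware is the bottleneck rather than storage cleverness.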

BTF Measurement. The seminal work by Dana et al. uses a 3-DOF robotic arm that incrementally rotates and tilts a sample in front of a light source. It produces 205 evenly distributed samples, but requires the sample to be affixed to the device, so in-situ measurement is not possible. Perlin and Han's approach measures textured surfaces (the BTF) in situ, as viewed or illuminated from different directions.

BTF using a Kaleidoscope. The approach is based on the principle of the kaleidoscope [Brewster 1819]: a hollow tube of polygonal cross-section whose inner walls are lined with front-surface mirrors. An object at the end of the kaleidoscope appears to multiply into replicated images of itself.

View through a Kaleidoscope

BTF using a Kaleidoscope. When the far end is tapered, the sample looks like a faceted virtual sphere, since each successive reflection reorients the sample a little further from the perpendicular. This is analogous to having an entire array of cameras pointing toward the surface, which is optimal for measuring the BTF.
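The faceting can be modeled with ordinary mirror reflections. The sketch below (assumed geometry: one wall of the tube leaning 9° from vertical, the taper angle used in the paper) reflects the straight-down view direction off that wall and reports the resulting tilt:

```python
import math

def reflect(d, n):
    """Reflect direction d across a mirror plane with unit normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A wall tilted 9 degrees from vertical; its normal correspondingly
# leans 9 degrees out of the horizontal plane.
tau = math.radians(9.0)
normal = (math.cos(tau), 0.0, math.sin(tau))
view = (0.0, 0.0, -1.0)                    # camera looking straight down the tube
once = reflect(view, normal)               # first-order reflection
tilt = math.degrees(math.acos(-once[2]))   # angle from straight-down
print(round(tilt, 6))                      # one bounce tilts the view by 2 * 9 = 18 degrees
```

Each additional bounce tilts the virtual view further from the perpendicular, which is exactly why the tapered tube behaves like a hemispherical camera array.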

Optical Schematic. The optical paths of the camera and projector are merged using a 45° beam splitter.

Illumination. The kaleidoscope is also an illumination technique. When the projector is pointed down the kaleidoscope, different pixels of the image arrive at the sample after having reflected in different ways, approaching it from various directions. Different regions of the projected image therefore behave like different light sources. By keeping only certain pixels of the projected image bright, one can choose the direction from which to illuminate the sample.

Procedure. BTF measurement proceeds by taking a sequence of sub-measurements. In each sub-measurement, exactly one region of the illumination image is bright. Each region corresponds to a unique sequence of reflections of light off the kaleidoscope walls, so the sample is illuminated from a unique sub-range of incoming light directions. The complete measurement uses all regions in turn to illuminate the sample.
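One sub-measurement's illumination image amounts to a binary mask in which a single region is lit. The function below is an illustrative stand-in, not the authors' code; the region-to-pixel mapping is assumed to come from the illumination-alignment step described later:

```python
def illumination_image(width, height, regions, lit_index):
    """Binary projector frame for one sub-measurement: all pixels dark
    except those belonging to the region being lit.  `regions` maps a
    region index to the set of projector pixels (x, y) that reach the
    sample via one particular sequence of mirror reflections."""
    img = [[0] * width for _ in range(height)]
    for x, y in regions[lit_index]:
        img[y][x] = 255
    return img
```

Iterating `lit_index` over all regions and capturing one camera frame per frame of this sequence yields the complete set of sub-measurements.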

Advantages. No moving parts, guaranteeing registration of sub-measurements and improved accuracy. Can measure in situ. Can be used for skin and for loose items such as pebbles. Requires only a single camera, lowering cost. Richly samples the BTF: the initial prototype captured 484 view/illumination pairs (compared with Dana et al.'s 205).

Design Parameters. In general, the kaleidoscope can be made as a regular polygon of n sides, for n ≥ 3. Not every virtual facet is complete; the number of fragmented facets varies with n. Image processing is most easily performed on rectangular images, so for n ≠ 4 the largest inscribable square is used. The authors focused on n = 3, which is simplest and has the largest number of whole facets.

Simulations of n

Choice of Taper Angle. A large taper angle causes reflections to tilt further from the normal, so fewer facets are visible. A narrow taper angle makes more facets visible, but capturing more facets in a single view leaves fewer pixels per view, reducing spatial resolution.

Choice of Taper Angle. The authors chose a taper angle that tilts each mirror 9° from vertical. This provides four orders of reflection down to the horizon, with a final effective angle of 76°, and allows measuring 22 complete views, providing 22² = 484 distinct view/illumination pairs.
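Under a small-angle approximation, the view tilt after k bounces is roughly 2k times the taper angle. For a 9° taper this puts the fourth-order reflections near 72°, in the neighborhood of (but not exactly) the 76° effective angle reported; the exact value depends on the full 3D geometry of the tube:

```python
taper_deg = 9.0   # each mirror leans 9 degrees from vertical
# Every bounce off a tapered mirror adds about twice the taper angle
# to the view's tilt from the surface normal (small-angle approximation).
tilts = [2 * k * taper_deg for k in range(5)]   # reflection orders 0..4
print(tilts)   # [0.0, 18.0, 36.0, 54.0, 72.0]
```

This makes the design trade-off concrete: a larger taper would reach grazing angles in fewer orders, giving fewer (but higher-resolution) facets per view.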

Experimental Setup

Calibration. A planar 3x3 checkerboard pattern was used, with corner detection to identify sub-pixel coordinates. These are used to compute the best homography transform mapping each patch to the unit square. Each transformation is applied to the 22 imaging shots and saved to disk. The result is a 22x22 array of images indexed by projector facet and camera facet. Correction for the camera's lens distortion needs to be done only once, using the technique of Zhang [1999].
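The patch-to-unit-square mapping is a 4-point homography. The sketch below uses Heckbert's closed form for the unit-square-to-quadrilateral projective map (the function names and corner coordinates are illustrative, not the authors' code); inverse-warping each pixel of the rectified unit-square image through this map finds the source pixel in the camera facet:

```python
def square_to_quad(quad):
    """Homography mapping the unit square to a quadrilateral.
    quad = [(x0, y0), (x1, y1), (x2, y2), (x3, y3)], the images of the
    square corners (0,0), (1,0), (1,1), (0,1) in that order.  Assumes a
    non-degenerate quad (the denominator below must be nonzero)."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = quad
    sx = x0 - x1 + x2 - x3
    sy = y0 - y1 + y2 - y3
    dx1, dy1 = x1 - x2, y1 - y2
    dx2, dy2 = x3 - x2, y3 - y2
    det = dx1 * dy2 - dy1 * dx2
    g = (sx * dy2 - sy * dx2) / det
    h = (sy * dx1 - sx * dy1) / det
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0
    return (a, b, c, d, e, f, g, h)

def apply_homography(H, u, v):
    """Map a unit-square point (u, v) into the quadrilateral."""
    a, b, c, d, e, f, g, h = H
    w = g * u + h * v + 1.0
    return ((a * u + b * v + c) / w, (d * u + e * v + f) / w)
```

In practice one would loop over the output grid, map each (u, v) through the homography, and bilinearly sample the camera image at the resulting point.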

Full 22x22 BTF Measurements

Illumination Alignment. It must be determined which pixels in the projected image illuminate each kaleidoscopically reflected image of the surface. In the current work this is done manually, using the image from a video camera peering into the kaleidoscope as a guide; ideally it would be done automatically. It only needs to be done once.

2 Different Illumination Angles

BSSTF Trials. For surfaces with appreciable sub-surface scattering, one measures the BSSRDF (Bidirectional Scattering-Surface Reflectance Distribution Function): illuminate a small spot on the sample, then measure the surrounding region [Jensen et al. 2001]. Incrementally moving the spot allows measuring a Bidirectional Scattering-Surface Texture Function, BSSTF(ui, vi, ue, ve, θi, φi, θe, φe), which uses two dimensions for each of four quantities: the entry point of light on the sample, the exit point of light on the sample, the incoming spherical angle, and the outgoing spherical angle.
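The spot-moving acquisition loop can be sketched as below; `capture` is a hypothetical stand-in for projecting one spot through the kaleidoscope and grabbing a camera frame, and the data structure is illustrative rather than the authors' format:

```python
def measure_bsstf(spot_positions, capture):
    """Sketch of the BSSTF measurement loop: illuminate one small spot
    at a time and record the full kaleidoscope image of the response.
    Each frame covers all exit points and exit angles for that entry
    point, so the loop fills in the remaining two dimensions."""
    data = {}
    for spot in spot_positions:     # entry point (u_i, v_i) of the light
        data[spot] = capture(spot)  # hypothetical project-and-capture step
    return data
```

Each captured frame already spans four of the eight dimensions (exit point and exit angle, via the facets), so only the entry point and entry angle need to be stepped explicitly.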

Two BSSTF Measurements

Future Work. Improve image extraction to utilize the data currently ignored. Add high-dynamic-range (HDR) capture by taking multiple image captures at varying exposure lengths. Adjust the lenses and cameras used to make the device more compact. Build a room-sized version to capture movement.