Eigen-Texture Method (an appearance-compression-based method) and Surface Light Fields for 3D Photography. Presented by Youngihn Kho.

Eigen-Texture Method: Appearance Compression based on 3D Modeling  Authors: Ko Nishino, Yoichi Sato, Katsushi Ikeuchi (CVPR '99)  texture/index.html

Appearance-based method  Inputs: 3D geometry and a set of color images. Output: synthesized images from arbitrary viewpoints. Objective: a compressed representation of the model's appearance.

Eigen-Texture method overview: sampling → encoding → decoding.

Sampling  3D geometry is captured by a range camera. Photos are taken by rotating the object and are registered to the geometry. Each color image is divided into small areas according to their corresponding triangles on the object. Cell: a normalized triangular patch. Cell image: the warped color image of a cell. Compression is performed on the sequence of cell images of each cell.

Sampling  [Figure: appearance change of one patch as θ varies from 0° to 357°.] Image coherence enables a high compression ratio and interpolation of appearances.

Compression key idea  Instead of storing the whole sequence of images, we find a small set of new cell images (eigen-cells), then represent each cell image as a linear combination of those eigen-cells. We then store only the eigen-cells and the coefficients of each cell image.

At a glance…  [Figure: N 2-D data points stored raw take size 2N; projected onto a single principal vector, they take size N + 2 (N scalar coefficients plus one 2-D basis vector).]

Matrix construction  Each cell image is unrolled into a single row vector; stacking the sequence of cell images gives an M x 3N matrix (M: number of images, N: number of pixels in each cell, with a factor of 3 for RGB).

Eigen method  [Figure: decomposing the cell-image matrix into K eigenvectors.]

Eigen ratio  Sort the eigenvalues and pick the k largest. Preserving the original exactly would require all eigenvalues, but we want compression, so we keep only enough eigenvectors to reach a target eigen ratio (the kept eigenvalues' share of the total).
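
To make the encode step concrete, here is a minimal NumPy sketch (not from the paper): it stacks the flattened cell images into the M x 3N matrix, takes eigen-cells from an SVD, and picks k with a fixed eigen-ratio threshold. The threshold value and all function names are illustrative.

```python
import numpy as np

def compress_cell_sequence(cells, eigen_ratio=0.999):
    """Compress a sequence of cell images (an M x 3N matrix) with PCA.

    cells: array of shape (M, 3N), one flattened RGB cell image per row.
    Returns the mean row, the k eigen-cells, and the (M, k) score matrix.
    """
    mean = cells.mean(axis=0)
    X = cells - mean
    # SVD yields the eigenvectors of X^T X without forming the covariance.
    _, S, Vt = np.linalg.svd(X, full_matrices=False)
    # Pick the smallest k whose cumulative eigenvalue share reaches the ratio.
    eigvals = S ** 2
    ratios = np.cumsum(eigvals) / eigvals.sum()
    k = int(np.searchsorted(ratios, eigen_ratio)) + 1
    eigen_cells = Vt[:k]              # (k, 3N) basis images
    scores = X @ eigen_cells.T        # (M, k) coefficients per view
    return mean, eigen_cells, scores

def decode(mean, eigen_cells, score):
    """Reconstruct one cell image: mean + linear combination of eigen-cells."""
    return mean + score @ eigen_cells
```

The cell-adaptive dimensions described a few slides later fall out of the same threshold: each cell sequence simply ends up with its own k.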

Decoding with eigenvectors  A synthesized view is a linear combination of the k eigen-cells weighted by the stored scores: cell image = a0 x (eigen-cell 0) + a1 x (eigen-cell 1) + … + a(k-1) x (eigen-cell k-1).

Compression ratio  Size of a sequence of cell images: M x 3N. Size of the k eigen-cells: k x 3N. Coefficients ("scores") of the cell images: k x M. Therefore the compression ratio is (k x 3N + k x M) / (M x 3N) = k(3N + M) / (3MN).
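
Illustrative numbers (not from the paper): with M = 120 views and N = 64 pixels per cell, keeping k = 5 eigen-cells gives k(3N + M) / (3MN) = 5 x (192 + 120) / 23040 = 1560 / 23040 ≈ 6.8%, i.e. roughly 15:1 compression.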

Cell-adaptive dimensions  Several factors influence the required dimension: specularity, mesh precision, shadowing, etc. Since each sequence of cell images is compressed independently, we can use a different dimension k for each cell. This is done by applying a fixed threshold on the eigen ratio.

Interpolation  Synthesis from a novel viewpoint is done in eigen space by interpolating the "coefficients" (scores) of nearby sampled views.
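
A small sketch of this step, reusing mean, eigen_cells, and decode from the sketch above; the linear blend between two sampled views is one simple choice, not necessarily the paper's exact scheme.

```python
def interpolate_view(mean, eigen_cells, score_a, score_b, t):
    """Synthesize a novel view between two sampled ones by
    interpolating scores in eigen space, then decoding."""
    score = (1.0 - t) * score_a + t * score_b
    return decode(mean, eigen_cells, score)
```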

Integrating into a real scene  The object can be rendered under an arbitrary illumination condition: sample color images under several single-light conditions, then synthesize the scene under an (approximate) arbitrary illumination as a linear combination of those base images.
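
This works because image formation is linear in the light sources. A minimal sketch, assuming base_images holds one image per single-light condition and weights gives the desired intensity of each light (names are illustrative):

```python
import numpy as np

def relight(base_images, weights):
    """Approximate an arbitrary illumination as a weighted sum of
    single-light images (linearity of light transport)."""
    base = np.stack(base_images, axis=0)      # (L, H, W, 3)
    w = np.asarray(weights, dtype=float)      # (L,)
    return np.tensordot(w, base, axes=1)      # (H, W, 3)
```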

Shadowing

Discussion  Contributions: high compression ratio, interpolation in eigen space, global illumination effects. Drawbacks: expensive pre-computation time; limited sampling positions; requires a dense mesh?

Surface Light Fields for 3D Photography  Authors: Daniel N. Wood, et al. (SIGGRAPH 2000)  /projects/slf/

Surface Light Field  A surface light field is a function that assigns a color to each ray originating from every point on the surface. Conceptually, every point on the surface has its own corresponding lumisphere.
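
To make the definition concrete, a surface light field can be read as a function SLF(point, direction) -> RGB. A toy sketch, assuming a lumisphere is stored as directional samples and queried by nearest direction (the paper instead fits piecewise-linear lumispheres):

```python
import numpy as np

def eval_lumisphere(directions, colors, d):
    """Nearest-direction lookup in one lumisphere.

    directions: (S, 3) unit vectors sampled on the sphere.
    colors:     (S, 3) RGB value stored for each direction.
    d:          (3,) query direction (unit vector).
    """
    i = int(np.argmax(directions @ d))  # max dot product = closest direction
    return colors[i]
```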

Overview: Data Acquisition → Estimation and Compression → Rendering → Editing


Data Acquisition  Range scanning  Reconstructing the geometry  Acquiring the photographs  Registering the photographs to the geometry

Scan and reconstruct  Scans are aligned with a closest-points (ICP-style) algorithm and merged with the volumetric method of Curless and Levoy.

Acquiring photographs  Used the Stanford spherical gantry with a calibrated camera.

Register photographs to the geometry Manually selected correspondences

How to represent?  MAPS (Multiresolution Adaptive Parameterization of Surfaces): base mesh + wavelets (Aaron Lee, et al., SIGGRAPH '98). [Figure: map between the base mesh and the original mesh.]

Data lumisphere at each point  [Figure: lumisphere, base mesh, scanned mesh.]

Overview: Data Acquisition → Estimation and Compression → Rendering → Editing

Estimation and Compression  Estimation problem: how do we find piecewise-linear lumispheres from the given data lumispheres? Three methods: pointwise fairing, function quantization, principal function analysis.

Point-wise fairing  Estimate the least-squares best-approximating lumisphere for each surface point. Error function = distance term + thin-plate energy term. The results are high quality but the file size is large, so a compression technique is needed.

Point-wise fairing
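
A heavily simplified sketch of the idea: a regularized least-squares fit per surface point, with a generic graph Laplacian standing in for the thin-plate energy term. The snap-to-nearest-vertex data term, the solver, and all names here are assumptions, not the paper's formulation.

```python
import numpy as np

def fair_lumisphere(data_dirs, data_colors, sphere_dirs, laplacian, lam=0.1):
    """Least-squares lumisphere fit with a smoothness penalty.

    Solves min ||A x - b||^2 + lam ||L x||^2 per color channel, where
    A snaps each data sample to its nearest lumisphere vertex (unit
    directions assumed) and L is a graph Laplacian over those vertices.
    """
    S = len(sphere_dirs)
    nearest = np.argmax(data_dirs @ sphere_dirs.T, axis=1)    # (D,)
    A = np.zeros((len(data_dirs), S))
    A[np.arange(len(data_dirs)), nearest] = 1.0
    lhs = A.T @ A + lam * laplacian.T @ laplacian
    rhs = A.T @ data_colors                                    # (S, 3)
    return np.linalg.solve(lhs, rhs)                           # fitted colors
```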

Compression  We don't want each grid point to have its own lumisphere. Rather, we want a small set of lumispheres that can be used to closely approximate all the data lumispheres. Standard techniques: vector quantization and principal function analysis.

Compression in point-wise fairing  One option is to run vector quantization or principal component analysis directly on the pointwise-fairing results. This is not a good idea: the fairing stage already includes a re-sampling step, and many parts of its output are fiction (filled in, not measured). So instead we manipulate the data lumispheres directly.

Two pre-processing steps  Two transformations are applied to make the lumispheres more compressible: median removal and reflected re-parameterization.

Median removal

Reflected re-parameterization
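
A small sketch of the two transformations, assuming one lumisphere is stored as an (S, 3) array of colors indexed by unit viewing directions; the formula r = 2(n·d)n - d is the standard mirror reflection about the normal.

```python
import numpy as np

def median_removal(colors):
    """Subtract the per-channel median so lumispheres cluster better."""
    med = np.median(colors, axis=0)
    return colors - med, med

def reflect_directions(view_dirs, normal):
    """Re-parameterize by the mirror reflection of the view direction
    about the surface normal, which lines up specular highlights of
    different surface points and aids compression."""
    n = normal / np.linalg.norm(normal)
    return 2.0 * (view_dirs @ n)[:, None] * n - view_dirs
```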

Effect of reflection  [Figure: lumispheres before and after reflected re-parameterization.]

Function quantization  [Figure: input data lumispheres matched against a codebook of lumispheres.]

Lloyd iteration  Start with a single initial codeword and a random set of training lumispheres. Then, until the codebook reaches the desired size: split and perturb the codebook, and repeatedly alternate projection (assign each training lumisphere to its closest codeword) and optimization (for each cluster, find the best piecewise-linear lumisphere) until the error improvement is less than some threshold.
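
A generic LBG-style sketch of this split-and-perturb Lloyd iteration, operating on lumispheres flattened to vectors; the real optimization step fits the best piecewise-linear lumisphere per cluster, for which the mean update below is a stand-in.

```python
import numpy as np

def lloyd_codebook(train, target_size, tol=1e-4, rng=None):
    """Split-and-perturb Lloyd (LBG) iteration on flattened lumispheres.

    train: (T, F) array, one training lumisphere per row.
    Doubles the codebook until it holds at least target_size codewords.
    """
    rng = rng or np.random.default_rng(0)
    codebook = train.mean(axis=0, keepdims=True)   # single initial codeword
    while len(codebook) < target_size:
        # Split: replace every codeword with two slightly perturbed copies.
        noise = 1e-3 * rng.standard_normal(codebook.shape)
        codebook = np.vstack([codebook - noise, codebook + noise])
        prev_err = np.inf
        while True:
            # Projection: assign each sample to its closest codeword.
            d2 = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            assign = d2.argmin(axis=1)
            err = d2[np.arange(len(train)), assign].mean()
            if prev_err - err < tol:               # converged at this size
                break
            prev_err = err
            # Optimization: re-fit each codeword to its cluster (stand-in
            # for fitting the best piecewise-linear lumisphere per cluster).
            for j in range(len(codebook)):
                members = train[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
    return codebook
```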

Lloyd iteration Codeword

Lloyd iteration Clone and perturb code words

Lloyd iteration  Divided into several clusters

Lloyd iteration Optimize code words in the cluster

Lloyd iteration New clusters

Principal function analysis  A generalization of principal component analysis. Again we find a set of codewords (prototypes), but instead of assigning one prototype to each grid point, we approximate each lumisphere with a linear combination of prototypes.
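
Since PFA generalizes PCA from vectors to functions, the plain-PCA analogue conveys the idea: fit an affine subspace (mean plus prototypes) and store each lumisphere as its low-dimensional coordinates. A sketch on flattened lumispheres, with illustrative names:

```python
import numpy as np

def pca_subspace(lumispheres, dim):
    """Fit an affine subspace: mean + span of `dim` principal prototypes."""
    mean = lumispheres.mean(axis=0)
    _, _, Vt = np.linalg.svd(lumispheres - mean, full_matrices=False)
    prototypes = Vt[:dim]                          # (dim, F)
    coords = (lumispheres - mean) @ prototypes.T   # per-point coordinates
    return mean, prototypes, coords
```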

Principal function analysis  [Figure: input data lumispheres and prototype lumispheres spanning a subspace of lumispheres.]

Principal function analysis  [Figure: prototype lumispheres spanning the approximating subspace.]

Principal function analysis

Compression results  [Figure panels: point-wise fairing; FQ with 1024 codewords; PFA with 2 dimensions; PFA with 5 dimensions.]

Overview: Data Acquisition → Estimation and Compression → Rendering → Editing

Rendering  Basic algorithm. 1. First pass: render the geometry in false color, encoding the face ID and barycentric coordinates of each pixel. 2. Second pass: scan the frame buffer and evaluate the surface light field for each pixel using its view direction.

Rendering  [Figure: first pass (false-color IDs) and second pass (shaded result).]
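
A schematic of the second pass in NumPy terms, assuming the first pass already produced per-pixel face IDs and barycentric coordinates, and that slf(face, bary, view_dir) evaluates the compressed representation; every name here is illustrative.

```python
import numpy as np

def second_pass(face_id, bary, view_dirs, slf, background=(0, 0, 0)):
    """Scan the frame buffer and shade every covered pixel.

    face_id:   (H, W) int, -1 where no geometry was rasterized.
    bary:      (H, W, 3) barycentric coordinates from the false-color pass.
    view_dirs: (H, W, 3) per-pixel view directions.
    slf:       callable evaluating the surface light field.
    """
    H, W = face_id.shape
    out = np.tile(np.asarray(background, float), (H, W, 1))
    ys, xs = np.nonzero(face_id >= 0)
    for y, x in zip(ys, xs):
        out[y, x] = slf(face_id[y, x], bary[y, x], view_dirs[y, x])
    return out
```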

View-dependent rendering  View-dependent LOD (Hoppe, et al.). Basic principle: subdivide the "important" parts more (add wavelets). Three criteria: 1. view frustum, 2. surface orientation, 3. screen-space geometric error.

Example - view dependent LOD

Overview: Data Acquisition → Estimation and Compression → Rendering → Editing

Editing  Thanks to the decoupling of the geometry and the light field, three operations are possible: lumisphere editing, rotating the environment, deforming the geometry.

Editing: lumisphere editing  Applying simple image processing to the lumispheres. [Figure: original vs. filtered.]

Editing: rotating the environment  Rotating the lumispheres. [Figure: original vs. rotated.]

Editing: deforming the geometry  Embed the modified base mesh, compute new normals, and update the lumisphere orientations accordingly. [Figure: original vs. deformed.]

Editing: limitations  It's not always physically correct! It is more nearly correct if the environment is infinitely far away and there is no occlusion, shadowing, or interreflection. One more problem: our lumisphere sampling does not cover the complete sphere, so some inference is needed. Even so, the results look nice.

Some statistics: compression
Point-wise fairing: memory = 177 MB, RMS error = 9
FQ (2000 codewords): memory = 3.4 MB, RMS error = 23
PFA (dimension 3): memory = 2.5 MB, RMS error = 24
PFA (dimension 5): memory = 2.9 MB, RMS error = ?

Some statistics: time  Compute times on a ~450 MHz Pentium III:
Range scanning: 3 hours
Geometry registration: 2 hours
Image-to-geometry alignment: 6 hours
MAPS (sub-optimal): 5 hours
Assembling data lumispheres: 24 hours
Pointwise fairing: 30 minutes
FQ codebook construction (10%): 30 hours
FQ encoding: 4 hours
PFA "codebook" construction (0.1%): 20 hours
PFA encoding: 2 hours

Benchmark  Pointwise-faired surface light field (177 MB) vs. uncompressed two-plane light field (177 MB).

Benchmark  Principal-function-analysis surface light field (2.5 MB) vs. vector-quantized two-plane light field (8.1 MB).

Summary  Estimation and compression: function quantization, principal function analysis. Rendering: from the compressed representation, with view-dependent LOD. Editing: lumisphere filtering and rotation, geometry deformation.

Future  Combining function quantization and principal function analysis  Wavelet representation of a surface light field  Hole filling using texture synthesis