Eigen-Texture Method: an appearance-compression-based method
Surface Light Fields for 3D Photography
Presented by Youngihn Kho
Eigen-Texture Method: Appearance Compression Based on 3D Modeling
Authors: Ko Nishino, Yoichi Sato, Katsushi Ikeuchi (CVPR '99)
texture/index.html
Appearance-based method
Inputs: 3D geometry and a set of color images
Outputs: synthesized images from arbitrary viewpoints
Objective: a compressed representation of the model's appearance
Eigen-Texture Method: Overview
sampling → encoding → decoding
Sampling
The 3D geometry is captured by a range camera.
Photographs are taken while rotating the object and are registered to the geometry.
Each color image is divided into small areas according to its corresponding triangles on the object.
Cell: a normalized triangular patch.
Cell image: the color image warped onto a cell.
Compression is performed on the sequence of cell images of each cell.
Sampling
[Figure: appearance change of one patch as θ varies from 0° to 357°]
Image coherence yields a high compression ratio and enables interpolation of appearances.
Compression: key idea
Instead of storing the whole sequence of images, find a small set of new cell images (eigen cells), then represent each cell image as a linear combination of those eigen cells. Only the eigen cells and the coefficients of each cell need to be stored.
At a glance…
[Figure: original data (size = 2N) vs. representation using a principal vector (size = N + 2)]
Matrix construction
Each cell image is flattened into a row vector; the sequence of cell images for one cell forms an M × 3N matrix.
M: number of images
N: number of pixels in each cell
Eigen method
Eigen-decompose the cell-image matrix and keep the k largest eigenvectors (the eigen cells).
Eigen ratio
Sort the eigenvalues and pick the k largest. Preserving the original exactly would require all eigenvalues, but compression requires discarding the smaller ones; the cumulative eigen ratio controls how much appearance variation is retained.
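The encoding steps above can be sketched with NumPy. The cell sizes, the random stand-in cell images, and the 0.999 eigen-ratio threshold below are all hypothetical choices for illustration:

```python
import numpy as np

# Hypothetical sizes: M views of one cell, N pixels per cell, 3 color channels.
M, N = 120, 64
rng = np.random.default_rng(0)
X = rng.random((M, 3 * N))          # rows = flattened cell images

# Subtract the mean image so the eigenvectors capture appearance change.
mean = X.mean(axis=0)
Xc = X - mean

# SVD of the centered matrix gives the eigen cells (rows of Vt) directly;
# the squared singular values are the eigenvalues.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
eigvals = s ** 2

# Pick the smallest k whose cumulative eigen ratio exceeds a threshold.
ratio = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(ratio, 0.999) + 1)

eigen_cells = Vt[:k]                # k x 3N basis images
scores = Xc @ eigen_cells.T         # M x k coefficients, one row per view

# Decoding: each cell image is the mean plus a linear combination of eigen cells.
recon = mean + scores @ eigen_cells
```

Only `eigen_cells`, `scores`, and `mean` need to be stored per cell.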
Decoding with eigenvectors
synthesized view = a₀ × (eigen cell 0) + a₁ × (eigen cell 1) + … + aₖ₋₁ × (eigen cell k−1)
i.e., a linear combination of the k eigen cells weighted by the scores.
Compression ratio
Size of a sequence of cell images: M × 3N
Size of k eigen cells: k × 3N
Coefficients (scores) of the cell images: k × M
Therefore, compression ratio = (k × 3N + k × M) / (M × 3N).
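The storage accounting above is simple arithmetic; a small helper, with hypothetical sizes plugged in, makes it concrete:

```python
# Storage for one cell: M views, N pixels (x3 color channels), k eigen cells.
def compression_ratio(M, N, k):
    original = M * 3 * N            # full sequence of cell images
    compressed = k * 3 * N + k * M  # eigen cells + per-view coefficients
    return compressed / original

# e.g. 120 views, 64-pixel cells, 8 eigen cells (illustrative numbers):
print(compression_ratio(120, 64, 8))
```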
Cell-adaptive dimensions
Several factors influence the required dimension: specularity, mesh precision, shadowing…
Since each sequence of cell images is compressed separately, a different dimension k can be used for each cell. This is done by applying a fixed threshold to the eigen ratio.
Interpolation
Synthesis from a novel viewpoint is done in eigenspace by interpolating the coefficients (scores).
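A minimal sketch of this interpolation, with made-up score vectors for two sampled viewing angles:

```python
import numpy as np

# Hypothetical scores (coefficients) of one cell at two sampled angles.
scores_0 = np.array([0.8, -0.2, 0.05])   # view at theta = 0 deg
scores_3 = np.array([0.7,  0.1, 0.00])   # view at theta = 3 deg

def interpolate_scores(a, b, t):
    """Linear interpolation in eigenspace, t in [0, 1]."""
    return (1 - t) * a + t * b

# Coefficients for a novel view at theta = 1.5 deg; decoding then proceeds as
# usual: cell_image = mean + scores_novel @ eigen_cells
scores_novel = interpolate_scores(scores_0, scores_3, 0.5)
```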
Integrating into a real scene
The scene can be rendered under arbitrary illumination:
- Sample color images under several single-light conditions.
- Synthesize the scene under an approximation of arbitrary illumination as a linear combination of those base images.
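The relighting step reduces to a weighted sum of the single-light base images. A sketch with hypothetical base images and light weights:

```python
import numpy as np

# Hypothetical base images, one per single-light-source condition (HxWx3 each).
rng = np.random.default_rng(1)
base_images = rng.random((4, 32, 32, 3))

# An arbitrary illumination condition is approximated as a weighted
# combination of the single-light base images.
light_weights = np.array([0.5, 0.2, 0.0, 0.3])
relit = np.tensordot(light_weights, base_images, axes=1)  # -> (32, 32, 3)
```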
Shadowing
Discussion
Contributions:
- high compression ratio
- interpolation in eigenspace
- global illumination effects
Drawbacks:
- expensive pre-computation time
- limited viewing positions
- requires a dense mesh?
Surface Light Fields for 3D Photography
Authors: Daniel N. Wood, et al. (SIGGRAPH 2000)
/projects/slf/
Surface Light Field
A surface light field is a function that assigns a color to each ray originating from every point on the surface. Conceptually, every point on the surface has a corresponding lumisphere.
Overview Data Acquisition Estimation And Compression Rendering Editing
Data Acquisition
- Range scanning
- Reconstructing the geometry
- Acquiring the photographs
- Registering the photographs to the geometry
Scan and reconstruct
Uses the iterated closest points algorithm and the volumetric method of Curless and Levoy.
Acquiring photographs
Used the Stanford spherical gantry with a calibrated camera.
Registering photographs to the geometry
Done with manually selected correspondences.
How to represent?
MAPS (Multiresolution Adaptive Parameterization of Surfaces)
- base mesh + wavelets
- (Aaron Lee, et al., SIGGRAPH '98)
[Figure: original mesh mapped onto the base mesh]
Data lumispheres at each point
[Figure: lumisphere shown over the base mesh and the scanned mesh]
Overview Data Acquisition Estimation And Compression Rendering Editing
Estimation and Compression
Estimation problem: how do we find a piecewise-linear lumisphere from the given data lumispheres?
Three methods:
- pointwise fairing
- function quantization
- principal function analysis
Pointwise fairing
Estimate the least-squares best-approximating lumisphere for each surface point individually.
Error function = distance term + thin-plate energy term
Gives high quality but suffers from large file size; a compression technique is needed.
Point-wise fairing
Compression
We don't want each grid point to have its own lumisphere; rather, we want a small set of lumispheres that can be used to closely approximate all the data lumispheres.
Standard techniques:
- vector quantization
- principal function analysis
Compression in pointwise fairing
One option: apply vector quantization or principal component analysis directly to the pointwise-fairing results.
Not a good idea: a resampling step has already been applied, so many parts of those results are fiction!
Instead, manipulate the data lumispheres directly.
Two pre-processing steps
Two transformations are applied to make the data lumispheres more compressible:
- median removal
- reflected reparameterization
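Median removal is the simpler of the two: store each point's median color separately and compress only the residual. A sketch with a hypothetical sample count (reflected reparameterization, which depends on surface normals, is not shown):

```python
import numpy as np

# Hypothetical data lumisphere: S directional color samples at one point.
rng = np.random.default_rng(2)
lumisphere = rng.random((200, 3))

# Median removal: subtract the per-channel median, leaving a residual
# centered near zero that is easier to quantize; the median is stored aside.
median = np.median(lumisphere, axis=0)
residual = lumisphere - median

# Reconstruction just adds the median back.
restored = residual + median
```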
Median removal
Reflected re-parameterization
Effect of reflection
[Figure: lumispheres before and after reflected reparameterization]
Codebook of lumispheres Input data lumisphere Function quantization
Lloyd iteration: Start with an initial single codeword and a random set of training lumispheres, then repeatedly:
- Split and perturb the codebook, then repeatedly apply:
  - Projection: assign each training lumisphere to its closest codeword.
  - Optimization: for each cluster, find the best piecewise-linear lumisphere,
  until the error difference is less than some threshold.
Until the codebook reaches the desired size.
Lloyd iteration Codeword
Lloyd iteration Clone and perturb code words
Lloyd iteration Divided by several clusters
Lloyd iteration Optimize code words in the cluster
Lloyd iteration New clusters
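The split-project-optimize loop illustrated above can be sketched as an LBG-style iteration. This simplified version optimizes codewords as plain cluster centroids rather than fitting piecewise-linear lumispheres, and all sizes and data are hypothetical:

```python
import numpy as np

def lbg(training, target_size, n_iter=10, eps=1e-3, seed=0):
    """Lloyd iteration with codeword splitting (LBG-style sketch).

    training: (n, d) array of flattened lumisphere samples.
    Starts from a single codeword and doubles the codebook by
    split-and-perturb, re-optimizing with Lloyd steps each time.
    """
    rng = np.random.default_rng(seed)
    codebook = training.mean(axis=0, keepdims=True)   # initial single codeword
    while len(codebook) < target_size:
        # Split: clone every codeword and perturb the copies.
        noise = eps * rng.standard_normal(codebook.shape)
        codebook = np.vstack([codebook, codebook + noise])
        for _ in range(n_iter):
            # Projection: assign each sample to its closest codeword.
            d = np.linalg.norm(training[:, None, :] - codebook[None, :, :], axis=2)
            assign = d.argmin(axis=1)
            # Optimization: move each codeword to its cluster centroid.
            for j in range(len(codebook)):
                members = training[assign == j]
                if len(members):
                    codebook[j] = members.mean(axis=0)
    return codebook

# Hypothetical training set: 500 samples in 6 dimensions, codebook of size 8.
data = np.random.default_rng(3).random((500, 6))
cb = lbg(data, target_size=8)
```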
Principal function analysis
A generalization of principal component analysis. Again we find a set of codewords (prototypes), but instead of assigning one to each grid point, we approximate each data lumisphere with a linear combination of prototypes.
Principal function analysis Subspace of lumispheres Input data lumisphere Prototype lumisphere
Principal function analysis Approximating subspace Prototype lumisphere
Principal function analysis
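A crude way to picture principal function analysis is as clustered subspace fitting: partition the samples among several low-dimensional affine subspaces, alternating assignment and PCA refits. This is a hypothetical simplification (the paper's formulation also includes a fairing term), with made-up data:

```python
import numpy as np

def pfa_sketch(data, n_subspaces=2, dim=2, n_iter=10, seed=0):
    """Sketch: alternate assigning samples to affine subspaces and
    refitting each subspace by PCA on its members."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    assign = rng.integers(n_subspaces, size=n)
    for _ in range(n_iter):
        subspaces = []
        for j in range(n_subspaces):
            pts = data[assign == j]
            if len(pts) <= dim:
                pts = data                       # degenerate cluster: refit on all
            mean = pts.mean(axis=0)
            _, _, Vt = np.linalg.svd(pts - mean, full_matrices=False)
            subspaces.append((mean, Vt[:dim]))   # affine subspace (origin, basis)
        # Reassign each sample to the subspace with least projection error.
        errs = np.stack([
            np.linalg.norm((data - m) - ((data - m) @ B.T) @ B, axis=1)
            for m, B in subspaces])
        assign = errs.argmin(axis=0)
    return subspaces, assign

# Hypothetical data: 300 lumisphere samples in 5 dimensions.
data = np.random.default_rng(4).random((300, 5))
subs, labels = pfa_sketch(data)
```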
Compression results
- pointwise fairing
- FQ with 1024 codewords
- PFA with 2 dimensions
- PFA with 5 dimensions
Overview Data Acquisition Estimation And Compression Rendering Editing
Rendering
Basic algorithm:
1. First pass:
- render the geometry in false color
- encode the face ID and barycentric coordinates of each pixel
2. Second pass:
- scan the frame buffer and evaluate the surface light field for each pixel's view direction
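The false-color trick in the first pass amounts to packing an ID into an RGB triple so the second pass can decode it from the frame buffer. A minimal sketch (the packing scheme and buffer here are hypothetical, not the paper's exact encoding):

```python
import numpy as np

# Pack a face ID into a 24-bit RGB triple for false-color rendering.
def id_to_rgb(face_id):
    return np.array([(face_id >> 16) & 0xFF,
                     (face_id >> 8) & 0xFF,
                     face_id & 0xFF], dtype=np.uint8)

def rgb_to_id(rgb):
    return (int(rgb[0]) << 16) | (int(rgb[1]) << 8) | int(rgb[2])

# First pass (simulated): write one pixel's false color into a frame buffer.
frame_buffer = np.zeros((4, 4, 3), dtype=np.uint8)
frame_buffer[1, 2] = id_to_rgb(70000)

# Second pass: scan the buffer, recover the face ID per pixel, then evaluate
# the surface light field for that face and the pixel's view direction.
face_id = rgb_to_id(frame_buffer[1, 2])
```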
Rendering
First pass / Second pass
View-dependent rendering
View-dependent LOD (Hoppe, et al.)
Basic principle: subdivide the "important" parts more (add wavelets).
Three criteria:
1. view frustum
2. surface orientation
3. screen-space geometric error
Example - view dependent LOD
Overview Data Acquisition Estimation And Compression Rendering Editing
Editing
Enabled by the decoupling of geometry and the light field.
Three operations:
- lumisphere editing
- rotating the environment
- deforming the geometry
Editing - lumisphere editing
Applying simple image processing to the lumispheres.
[Figure: original vs. filtered]
Editing - rotating the environment
Rotating the lumispheres.
[Figure: original vs. rotated]
Editing - deforming the geometry
Embed the modified base mesh, compute new normals, and set the lumispheres accordingly.
[Figure: original vs. deformed]
Editing - limitations
The result is not always physically correct! It is more correct if:
- the environment is infinitely far away
- there is no occlusion, shadowing, or interreflection
One more problem: the lumisphere sampling does not cover the complete sphere, so some inference is needed.
Even so, the results look nice.
Some statistics - compression
Pointwise fairing: memory = 177 MB, RMS error = 9
FQ (2000 codewords): memory = 3.4 MB, RMS error = 23
PFA (dimension 3): memory = 2.5 MB, RMS error = 24
PFA (dimension 5): memory = 2.9 MB, RMS error = ?
Some statistics - time
Compute times on a ~450 MHz Pentium III:
- Range scanning: 3 hours
- Geometry registration: 2 hours
- Image-to-geometry alignment: 6 hours
- MAPS (sub-optimal): 5 hours
- Assembling data lumispheres: 24 hours
- Pointwise fairing: 30 minutes
- FQ codebook construction (10%): 30 hours
- FQ encoding: 4 hours
- PFA "codebook" construction (0.1%): 20 hours
- PFA encoding: 2 hours
Benchmark
Pointwise-faired surface light field (177 MB) vs. uncompressed 2-plane light field (177 MB)
Benchmark
Principal function analysis surface light field (2.5 MB) vs. vector-quantized 2-plane light field (8.1 MB)
Summary
Estimation and compression
- function quantization
- principal function analysis
Rendering
- from the compressed representation
- view-dependent LOD
Editing
- lumisphere filtering and rotation
- geometry deformation
Future Combining function quantization and principal function analysis Wavelet representation of a surface light field Hole filling using texture synthesis