Slide 1: Virtual Environments with Light Fields and Lumigraphs
CS 497, 16 April 2002
Presented by Mike Pilat
Slide 2: Three Papers
- Light Field Rendering (Levoy and Hanrahan, SIGGRAPH 1996)
- The Lumigraph (Gortler, Grzeszczuk, Szeliski, and Cohen, SIGGRAPH 1996)
- Dynamically Reparameterized Light Fields (Isaksen, McMillan, and Gortler, SIGGRAPH 2000)
Slide 3: Introduction
- Traditional 3D scenes are composed of geometric data and lights
- Rendering new views of such scenes can be costly
- Instead, use a set of pre-acquired images to compute new views of the scene
- Solution: image-based rendering
Slide 4: Image-Based Rendering
- IBR algorithms require little computational power, making them suitable for PCs and workstations
- The cost of interactive viewing is independent of scene complexity
- Image sources can be real photographs, rendered images, or both!
Slide 5: A long, long time ago…
- Environment maps
  - 1976, Blinn: "Texture and Reflection in Computer Generated Images"
  - 1986, Greene: "Environment Mapping and Other Applications of World Projections"
- They work the other way around too
  - 1995, Chen: "QuickTime VR"
- Major limitation: fixed viewpoint
Slide 6: …In a galaxy far, far away
- View interpolation (Chen, Greene, Fuchs, McMillan, Narayanan, and others)
  - Not always practical: requires a depth value for each pixel in the environment maps
  - Gaps created by occlusions must be filled
- Image interpolation (Laveau, McMillan, Seitz)
  - Requires finding correspondences between two images; this is the stereo-vision problem, which is hard
  - Correspondences don't always exist
Slide 7: Light Fields
Slide 8: Light Fields
- A light field is the radiance at a point, in a particular direction
- The term was coined by Gershun (1936, "The Light Field")
- Equivalent to the plenoptic function (1991, Adelson and Bergen: "The Plenoptic Function and the Elements of Early Vision")
- 5D light fields: panoramic images in 3D space (1995, McMillan and Bishop: "Plenoptic Modeling: An Image-Based Rendering System")
Slide 9: Light Fields
- In free space (no occluders), the 5D representation reduces to 4D: radiance does not change along a line unless it is blocked
- The redundant 5D representation is bad: it increases the dataset size and complicates reconstruction of radiance samples
- A 4D light field can be interpreted as a "function on the space of oriented lines"; this is the representation of Levoy and Hanrahan
- Lines are parameterized by their intersections with two planes in arbitrary positions
- "Light slab": restrict the parameters to [0,1] (coordinate sketch below)
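The two-plane mapping is mechanical enough to sketch in a few lines. A minimal Python example, assuming (purely for illustration) that the uv plane sits at z = 0 and the st plane at z = 1; the papers allow the planes to be in arbitrary positions:

```python
import numpy as np

def ray_to_slab(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to light-slab coordinates (u, v, s, t) by
    intersecting it with two parallel planes z = z_uv and z = z_st.
    Assumes the ray is not parallel to the planes."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    t_uv = (z_uv - o[2]) / d[2]          # parameter where the ray hits the uv plane
    t_st = (z_st - o[2]) / d[2]          # parameter where the ray hits the st plane
    u, v = (o + t_uv * d)[:2]
    s, t = (o + t_st * d)[:2]
    return u, v, s, t

# Example: a ray from an eye point at (0.2, 0.3, -1) toward +z
print(ray_to_slab(origin=(0.2, 0.3, -1.0), direction=(0.1, 0.0, 1.0)))
```

Restricting all four returned values to [0,1] is exactly the "light slab" restriction above.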
Slide 10: Parameterization
- The parameterization should have three qualities:
  - Efficient calculation: use only the view transform and the pixel location
  - Control over the set of lines: only a finite subset of line space is needed
  - Uniform sampling: the number of lines in the intervals between samples is uniform
- All views could be generated from one light slab, but only if its set of lines included every line intersecting the convex hull of the object
- That is not possible, so multiple slabs must be used
Slide 11: Huh?
Slide 12: Creating Light Fields from Rendered Images
- Render a 2D array of images; each image is a slice of the 4D light slab
- Place the camera's center of projection at grid locations on the uv plane
- A sheared perspective projection is needed to force the xy samples of the image to correspond exactly with the st samples (see the sketch below)
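The sheared perspective projection is an off-axis frustum, the same construction as OpenGL's glFrustum. A sketch under assumed geometry (camera at (u, v, 0) looking down -z, with the st window spanning [0,1]x[0,1] on the plane z = -1):

```python
import numpy as np

def sheared_projection(u, v, near=0.5, far=10.0):
    """Off-axis (sheared) perspective matrix, glFrustum-style, for a
    camera with center of projection at (u, v, 0) looking down -z.
    The frustum window is pinned to the st square [0,1]x[0,1] on the
    plane z = -1, so every camera's pixel samples line up with the
    same st sample locations."""
    # Window edges relative to the camera at distance 1, scaled to the near plane
    l, r = (0.0 - u) * near, (1.0 - u) * near
    b, t = (0.0 - v) * near, (1.0 - v) * near
    n, f = near, far
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0]])
```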
Slide 13: Creating Light Fields from Digitized Images
- Many problems:
  - Hundreds or thousands of images are needed
  - Lighting must be controlled (it must be static)
  - Optical constraints must be managed: angle of view, focal distance, depth of field, aperture
- Viewing styles:
  - Inward looking: flying around an object
  - Outward looking: looking out from an object (e.g., from inside a house)
Slide 14: Creating Light Fields from Digitized Images
- Computer-controlled acquisition
- Spherical motion: easier to cover the entire range of viewing directions, and the sampling rate is more uniform
- Planar motion: easier to build, and closer to the light-slab representation
- The authors constructed a planar gantry with x,y movement plus pitch and yaw
Slide 15: Looking at a Light Field
Slide 16: Compression
- Desirable properties:
  - Data redundancy: remove redundant information without affecting content; there is redundancy in all four dimensions
  - Random access: the samples referenced during rendering are scattered, and the access pattern changes as the viewer moves
  - Asymmetry: light fields are assumed to be pre-computed, so a long encoding time is acceptable if decoding is fast
  - Computational efficiency: decompression can't consume the full CPU, since the display algorithm needs CPU time as well
- A two-stage compression scheme is used
Slide 17: Compression
- Vector quantization (24:1)
  - A vector of samples is quantized to a predetermined reproduction vector, called a "codeword"
  - A "codebook" of reproduction vectors is used to encode the data (toy example below)
- Entropy coding (Lempel-Ziv, 5:1)
  - Objects are usually imaged on a constant-intensity background, so many vectors from the VQ stage occur with high probability
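A toy version of the VQ stage; the paper trains the codebook on a subset of the light field (using small 4D tiles, e.g. 2x2x2x2 RGB tiles as 48-dimensional vectors), while this sketch just uses random vectors and SciPy's generic k-means:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq

# Toy stand-in for light-field data: many short sample vectors.
rng = np.random.default_rng(0)
vectors = rng.random((10000, 12)).astype(np.float32)

# Train a codebook of 256 reproduction vectors ("codewords") on a subset.
codebook, _ = kmeans(vectors[::10], 256)

# Encode: each vector is replaced by the index of its nearest codeword.
indices, _ = vq(vectors, codebook)

# Decode: a table lookup reconstructs an approximation of the data.
reconstructed = codebook[indices]
print(indices.nbytes / vectors.nbytes)   # rough ratio before entropy coding
```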
Slide 18: Compression
- Two-stage decompression:
  - Decode the entropy coding while loading the file, ending up with a codebook and code indices packed into 16-bit words
  - De-quantization: when the rendering engine requests a light-field sample, the subscript index is calculated, looked up in the codebook to get a vector of sample values, and subscripted again for the requested sample
- Example dataset:

    Digitization
      Slabs            4
      Images/slab      32x16
      Pixels/image     256x256
      Total size       402 MB
    Compression
      VQ coding        24:1
      Entropy coding   5:1
      Total            120:1
      Compressed size  3.4 MB
Slide 19: Displaying a Light Field
- Step 1: compute the (u,v,s,t) parameters for each image ray
- Step 2: resample the radiance at those line parameters
Slide 20: Displaying a Light Field: Step 1
- A simple projective map takes image coordinates to (u,v) and (s,t)
- Texture mapping can compute the line coordinates: interpolate the homogeneous texture coordinates (uw, vw, w), then recover u = uw/w and v = vw/w
- The inverse map to (u,v,s,t) needs only two texture-coordinate calculations per ray
- Can be hardware accelerated!
Slide 21: Displaying a Light Field: Step 2
- Reconstruct the radiance function from the original samples
- Approximate by interpolating from the nearest samples; prefilter (a 4D mip map) if the image is small
- Quadrilinear interpolation on the full 4D function works best (sketch below)
- Apply a band-pass filter to remove high-frequency noise that may cause aliasing
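Quadrilinear interpolation is the 4D analogue of bilinear texture filtering: the 16 samples at the corners of the surrounding 4D cell are blended with weights formed from products of the fractional coordinates. A minimal sketch over a dense, uncompressed 4D array:

```python
import numpy as np
from itertools import product

def quadrilinear(field, u, v, s, t):
    """Interpolate a 4D light field (indexed [u, v, s, t]) at
    continuous coordinates, blending the 2^4 = 16 nearest samples."""
    coords = np.array([u, v, s, t])
    lo = np.floor(coords).astype(int)
    frac = coords - lo
    result = 0.0
    for corner in product((0, 1), repeat=4):          # the 16 surrounding samples
        c = np.array(corner)
        weight = np.prod(np.where(c, frac, 1.0 - frac))
        idx = np.minimum(lo + c, np.array(field.shape) - 1)  # clamp at edges
        result += weight * field[tuple(idx)]
    return result

field = np.random.rand(8, 8, 16, 16)                  # toy 4D light field
print(quadrilinear(field, 3.2, 4.7, 8.5, 9.1))
```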
Slide 22: Results
Slide 23: Results
Slide 24: Limitations
- High sampling density is needed to prevent excessive blurriness, so a large number of images is required; thanks to coherence, compression works well
- The observer is restricted to free space; this can be addressed by stitching together multiple light fields based on a geometric partition of the scene
- Illumination must be fixed; if interreflections are ignored, this can be addressed by augmenting the light field with surface normals and optical properties
Slide 25: Future Work
- Abstract light representations have not received the same systematic study as other methods of light representation; re-examine them from first principles
- Design better instrumentation for image acquisition, such as a parallel array of cameras
- Represent light interactions in high-dimensional matrices, and study how to compress them
Slide 26: The Lumigraph
Slide 27: Lumigraphs
- Limit interest to light leaving the convex hull of a bounded object
- Then only the values of the plenoptic function on a surrounding surface are needed
- A cube was chosen for computational simplicity
Slide 28: Parameterized Déjà Vu?
Slide 29: Parameterization
- Uses the same two-plane method as Levoy and Hanrahan
- Discrete subdivision of the function space: M subdivisions in st, N in uv
- The discretization is defined by basis functions, projection onto that basis, and the duals of the basis functions (see the expansion below)
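In symbols (notation hedged to match the slide's M/N subdivision): the continuous Lumigraph L is approximated by a finite expansion over basis functions B, and each coefficient is the projection of L onto the dual basis B~, which is exactly the "kernel" quoted on the rebinning slide:

```latex
\tilde{L}(s,t,u,v) \;=\; \sum_{i,j=1}^{M} \sum_{p,q=1}^{N}
    x_{i,j,p,q}\, B_{i,j,p,q}(s,t,u,v),
\qquad
x_{i,j,p,q} \;=\; \int \! L(s,t,u,v)\, \tilde{B}_{i,j,p,q}(s,t,u,v)\; ds\,dt\,du\,dv .
```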
Slide 30: Using Geometric Information
- Use geometric information to learn about the coherence of the Lumigraph function
- Modify the basis functions to weight for that coherence, using the depth at which each ray intersects the scene geometry (sketch below)
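Rather than quote the paper's closed-form correction, here is a purely geometric sketch of the same idea, with assumed plane placement (cameras on the st plane at z = 0, image plane uv at z = 1): a neighboring grid camera should sample the light field where it actually sees the surface point hit by the desired ray.

```python
import numpy as np

def depth_corrected_uv(s_cam, surface_point, z_uv=1.0):
    """Given a grid camera position s_cam on the st plane (z = 0) and
    the 3D point where the viewing ray hits the scene geometry, return
    the (u, v) where the ray from s_cam through that point crosses the
    uv plane at z = z_uv.  Neighboring cameras then sample where they
    actually 'see' the same surface point, which is the coherence the
    modified basis functions exploit."""
    o = np.array([s_cam[0], s_cam[1], 0.0])
    d = np.asarray(surface_point, float) - o
    lam = (z_uv - o[2]) / d[2]
    return (o + lam * d)[:2]

# A surface point halfway between the planes, straight ahead of (0.5, 0.5):
print(depth_corrected_uv((0.5, 0.5), (0.5, 0.5, 0.5)))   # -> [0.5 0.5]
print(depth_corrected_uv((0.6, 0.5), (0.5, 0.5, 0.5)))   # shifted u for the neighbor
```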
Slide 31: Creating a Lumigraph
Slide 32: Capturing from Synthetic Scenes
- Capture a single sample per Lumigraph coefficient at each grid point
- Place the camera's pinhole at the grid point, pointing down the z axis
- The pixel values (p,q) are used in the coefficients x_{i,j,p,q}
- To integrate against the kernel B~, shoot multiple rays, jittering the camera and pixel locations, and weight each image using B~ (sketch below)
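A sketch of that jittered estimate. `render_ray(st, uv)` is a hypothetical renderer call returning the radiance along one ray, and the kernel is simplified to a constant box; a faithful implementation would weight each sample by B~ as the slide says:

```python
import numpy as np

def lumigraph_coefficient(render_ray, grid_st, pixel_uv, radius, n=16, rng=None):
    """Monte Carlo estimate of one coefficient x_{i,j,p,q}: jitter the
    camera position on the st plane and the pixel position on the uv
    plane inside the kernel's support, then average the rendered
    radiance.  Assumes a constant (box) kernel for simplicity."""
    rng = rng or np.random.default_rng()
    total = 0.0
    for _ in range(n):
        st = np.asarray(grid_st) + rng.uniform(-radius, radius, 2)   # jitter camera
        uv = np.asarray(pixel_uv) + rng.uniform(-radius, radius, 2)  # jitter pixel
        total += render_ray(st, uv)   # radiance along the jittered ray
    return total / n
```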
Slide 33: Capturing from Real Images
- Use the motion-control platform from Levoy and Hanrahan; capture images at the grid points
- A hand-held camera would be preferable:
  - Place calibration marks in the scene
  - Obtain silhouettes through blue-screening to get a rough geometric shape
  - Use the silhouettes to build a volumetric model
  - Guide the user interactively in selecting new camera positions
Slide 34: Calibrating for Real Images
Slide 35: Rebinning
- "The coefficient associated with the basis function B is defined as the integral of the continuous Lumigraph function multiplied by some kernel function B~"
- A three-step approach was developed: splat, pull, push
Slide 36: Splat, Pull, Push
- Splatting estimates the integral: coefficients are computed with Monte Carlo integration
- Pulling computes coefficients for a hierarchical set of basis functions by linearly summing together higher-resolution kernels
- Pushing takes information from the lower-resolution grids and "pushes" it up to the higher-resolution ones to fill in gaps: upsample and convolve (toy version below)
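A one-dimensional toy version of pull-push, just to show the fill-in behavior; the paper works on 2D image grids with specific kernels, and this sketch assumes a power-of-two sample count:

```python
import numpy as np

def pull_push(values, weights):
    """Fill gaps in sparsely splatted data.  'Pull' builds a coarser
    level by weighted averaging of sample pairs; 'push' fills
    low-weight fine samples from the coarser level.  1D toy version;
    assumes len(values) is a power of two."""
    if len(values) <= 1:
        return values, weights
    # Pull: combine pairs, clamping weights to 1 as in the paper
    wsum = weights[0::2] + weights[1::2]
    w2 = np.minimum(wsum, 1.0)
    v2 = np.where(wsum > 0,
                  (weights[0::2]*values[0::2] + weights[1::2]*values[1::2])
                  / np.maximum(wsum, 1e-12), 0.0)
    v2, w2 = pull_push(v2, w2)                      # recurse to coarser levels
    # Push: blend coarse values into fine samples with deficient weight
    coarse = np.repeat(v2, 2)[:len(values)]
    filled = weights*values + (1.0 - weights)*coarse
    return filled, np.maximum(weights, np.repeat(w2, 2)[:len(values)])

vals = np.array([0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 2.0, 0.0])
wts  = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0])
print(pull_push(vals, wts)[0])   # gaps filled from coarser levels
```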
Slide 37: Reconstruction
- Ray tracing: generate a ray, calculate its (s,t,u,v) coordinates, and interpolate the nearby grid points
- Texture mapping: one texture is associated with each st grid point; project the pixel onto the uv plane
Slide 38: Results
Slide 39: Results
Slide 40: Dynamically Reparameterized Light Fields
Slide 41: New Parameterizations
- Interactive rendering of moderately sampled light fields with significant, unknown depth variation
- Passive autostereoscopic viewing with a "fly's-eye" lens array: the computation for synthesizing a new view comes directly from the optics of the display device
- To achieve the desired flexibility, the aperture filtering of the original paper is not performed
Slide 42: Light Slabs (Again)
- The two-plane parameterization impacts the reconstruction filters
- Aperture filtering removes high-frequency data, but if the plane is in the wrong place, only a blurred version of the scene is stored
Slide 43: Focal Surface Parameterization
- A 2D array of pinhole cameras is treated as a single optical system
- Each data camera D_{s,t} can have its own internal orientation and parameters
- A focal surface F is parameterized by (f,g)
Slide 44: Ray Construction
- Given a ray r, find the rays (s',t',u',v') and (s'',t'',u'',v'') in data cameras D_{s',t'} and D_{s'',t''} that intersect r at the same point (f,g) on the focal surface F
- Combine their values with a filter; in practice, more rays are used and the filter is weighted accordingly (sketch below)
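A sketch for a planar focal surface, assuming data cameras on the integer grid of a camera plane at z = 0. `sample_camera(s, t, f, g)` is a hypothetical lookup returning data camera D_{s,t}'s radiance toward the focal-surface point (f, g):

```python
import numpy as np

def reconstruct_ray(origin, direction, sample_camera, focal_depth):
    """Dynamically reparameterized lookup for one desired ray:
    intersect the ray with the focal surface to get (f, g), then
    bilinearly blend the four nearest data cameras' samples that
    pass through that same focal-surface point."""
    o, d = np.asarray(origin, float), np.asarray(direction, float)
    lam = (focal_depth - o[2]) / d[2]
    f, g = (o + lam * d)[:2]            # intersection with the focal surface
    lam0 = -o[2] / d[2]                 # crossing of the camera plane (z = 0)
    cs, ct = (o + lam0 * d)[:2]
    s0, t0 = int(np.floor(cs)), int(np.floor(ct))
    ws, wt = cs - s0, ct - t0
    value = 0.0
    for ds, dt in ((0, 0), (1, 0), (0, 1), (1, 1)):   # four nearest cameras
        w = (ws if ds else 1 - ws) * (wt if dt else 1 - wt)
        value += w * sample_camera(s0 + ds, t0 + dt, f, g)
    return value
```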
Slide 45: Variable Aperture
- A real camera's aperture produces depth-of-field effects
- These can be emulated by combining rays from several cameras
- Combine the samples from each D_{s,t} inside the synthetic aperture to produce the image (sketch below)
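Widening that reconstruction from the four nearest cameras to every camera inside an aperture radius yields depth of field: points on the focal surface stay sharp while points off it blur. A sketch reusing the hypothetical `sample_camera` from above:

```python
import numpy as np

def synthetic_aperture(cs, ct, f, g, sample_camera, radius):
    """Average samples from all data cameras within `radius` of the
    ray's crossing point (cs, ct) on the camera plane, each looking
    toward the same focal-surface point (f, g)."""
    total, weight = 0.0, 0.0
    for s in range(int(np.floor(cs - radius)), int(np.ceil(cs + radius)) + 1):
        for t in range(int(np.floor(ct - radius)), int(np.ceil(ct + radius)) + 1):
            r = np.hypot(s - cs, t - ct)
            if r <= radius:
                w = 1.0 - r / radius if radius > 0 else 1.0   # simple cone filter
                total += w * sample_camera(s, t, f, g)
                weight += w
    return total / weight if weight > 0 else 0.0
```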
Slide 46: Variable Focus
- Variable-focus effects can also be created
- Multiple focal surfaces can be used, one or many simultaneously
- A similar ray-construction technique applies
Slide 47: Boring Stuff
- Ray-space analysis: examine the effects of dynamic reparameterization on light fields when viewed in ray space
- Frequency-domain analysis: shearing can arbitrarily modify the relative sampling frequencies between dimensions in ray space
- See the paper for details
Slide 48: Rendering
- Ray tracing: trace rays as before
- Texture mapping: if the desired camera, the data cameras, and the focal surface are all planar, texture-mapping techniques similar to the Lumigraph method can be applied
Slide 49: Autostereoscopic Light Fields
- The parameterization can be thought of as an integral photograph, or the "fly's-eye view"
- Each lenslet is treated as a single pixel
Slide 50: View-Dependent Color
- Each lenslet can have view-dependent color
- Draw a ray through the principal point of the lenslet
- Use the paraxial approximation to determine which color will be seen from a given direction
Slide 51: Results
Slide 52: Results
Slide 53: Results
Slide 54: Results
Slide 55: Results
Slide 56: Results
Slide 57: Future Work
- Develop methods to exploit the redundancy of light fields
- Develop an algorithm for automatically selecting the focal plane (as camcorders do)
- Reparameterization shows promise for depth-from-focus and depth-from-defocus problems
Slide 58: Fin
Questions? Thanks!