View interpolation from a single view
1. Render object
2. Convert Z-buffer to range image
3. Re-render from new viewpoint
4. Use depths to resolve overlaps
Q. How to fill in holes?
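Steps 1–4 above amount to a forward 3D warp: back-project each pixel using its Z-buffer depth, reproject into the new camera, and keep the nearest surface at each target pixel. A minimal sketch (the intrinsics `K` and pose `R, t` interface is an assumption, not from the slides):

```python
import numpy as np

def warp_view(color, depth, K, R, t):
    """Forward-warp a rendered view to a new viewpoint.

    color : (H, W, 3) image rendered from the reference view
    depth : (H, W) range image recovered from the Z-buffer
    K     : (3, 3) camera intrinsics, assumed shared by both views
    R, t  : rotation and translation of the new view w.r.t. the old one

    Returns the re-rendered image and a hole mask.  A per-pixel Z-test
    resolves overlaps (step 4); pixels that no source pixel maps to
    remain holes (the question on the slide).
    """
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Back-project every pixel to a 3D point using its depth
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the new view and project
    pts_new = R @ pts + t.reshape(3, 1)
    proj = K @ pts_new
    u = np.round(proj[0] / proj[2]).astype(int)
    v = np.round(proj[1] / proj[2]).astype(int)
    z = proj[2]

    out = np.zeros_like(color)
    zbuf = np.full((H, W), np.inf)
    hole = np.ones((H, W), dtype=bool)
    src = color.reshape(-1, 3)
    ok = (u >= 0) & (u < W) & (v >= 0) & (v < H) & (z > 0)
    for i in np.flatnonzero(ok):
        if z[i] < zbuf[v[i], u[i]]:        # nearest surface wins
            zbuf[v[i], u[i]] = z[i]
            out[v[i], u[i]] = src[i]
            hole[v[i], u[i]] = False
    return out, hole
```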
View interpolation from multiple views
1. Render object from multiple viewpoints
2. Convert Z-buffers to range images
3. Re-render from new viewpoint
4. Use depths to resolve overlaps
5. Use multiple views to fill in holes
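Step 5 can be sketched as a depth-keyed merge: once each source view has been warped to the target viewpoint, the nearest available surface wins at every pixel, so a hole in one view is filled by any other view that saw that surface. The `(image, depth)`-pair interface is hypothetical, chosen for illustration:

```python
import numpy as np

def merge_views(warped):
    """Merge several views already warped to the same target viewpoint.

    warped : list of (image, depth) pairs, with depth = np.inf at holes.

    At each pixel the view with the smallest warped depth wins; a pixel
    is a hole only if no view saw it.
    """
    imgs = np.stack([im for im, _ in warped])   # (N, H, W, 3)
    deps = np.stack([d for _, d in warped])     # (N, H, W)
    best = np.argmin(deps, axis=0)              # nearest view per pixel
    out = np.take_along_axis(imgs, best[None, ..., None], axis=0)[0]
    hole = np.isinf(np.min(deps, axis=0))       # no view saw this pixel
    out[hole] = 0
    return out, hole
```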
Problems with view interpolation
resampling the range images
–block moves + image interpolation (Chen and Williams, 1993)
–splatting with space-variant kernels (McMillan and Bishop, 1995)
–fine-grain polygon mesh (McMillan et al., 1997)
missed objects
–interpolate from available pixels
–use more views (from Chen and Williams)
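The splatting approach above can be sketched as follows: each warped source pixel is accumulated as a small Gaussian footprint so that neighboring splats blend instead of leaving cracks between samples. The kernel is space-variant because its radius depends on how much the warp stretches that pixel; here the radius is simply passed in by the caller (an assumption for illustration):

```python
import numpy as np

def splat(accum, weight, u, v, color, radius):
    """Accumulate one source pixel as a Gaussian splat.

    (u, v)  : continuous target-image coordinates of the warped pixel
    radius  : footprint size, assumed derived by the caller from the
              local stretch of the warp (hence space-variant)
    """
    H, W = weight.shape
    r = int(np.ceil(2 * radius))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            x, y = int(round(u)) + dx, int(round(v)) + dy
            if 0 <= x < W and 0 <= y < H:
                w = np.exp(-((x - u) ** 2 + (y - v) ** 2)
                           / (2 * radius ** 2))
                accum[y, x] += w * color
                weight[y, x] += w

# After splatting every source pixel, normalize:
#   image = accum / np.maximum(weight[..., None], 1e-8)
```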
More problems with view interpolation
Obtaining range images is hard!
–use synthetic images (Chen and Williams, 1993)
–epipolar analysis (McMillan and Bishop, 1995)
  cylindrical epipolar geometry
  epipolar geometry
2D image-based rendering
Flythroughs of 3D scenes from pre-acquired 2D images
advantages
–low computation compared to classical CG
–cost independent of scene complexity
–imagery from real or virtual scenes
limitations
–static scene geometry
–fixed lighting
–fixed look-from or look-at point
Apple QuickTime VR
outward-looking
–panoramic views at regularly spaced points
inward-looking
–views at points on the surface of a sphere
A new solution: rebinning
–views must stay outside the convex hull of the object
–like rebinning in computed tomography
A light field is an array of images
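Concretely, a two-plane light field stores one image per camera position: `L[u, v]` is the image seen from position (u, v) on one plane, indexed by (s, t) on the other. A new view is rendered by evaluating one (u, v, s, t) ray per output pixel. A minimal sketch using nearest-neighbor lookup (real renderers interpolate quadrilinearly; the indexing convention here is an assumption):

```python
import numpy as np

def sample_light_field(L, u, v, s, t):
    """Nearest-neighbor lookup into a two-plane light field.

    L : 4D array of images, L[u, v] indexed by (s, t)
    Rounds the continuous ray coordinates to the nearest stored sample.
    """
    U, V, S, T = L.shape[:4]
    ui = int(np.clip(round(u), 0, U - 1))
    vi = int(np.clip(round(v), 0, V - 1))
    si = int(np.clip(round(s), 0, S - 1))
    ti = int(np.clip(round(t), 0, T - 1))
    return L[ui, vi, si, ti]
```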
Spherical 4-DOF gantry for acquiring light fields
–0.03 degree positioning error (1 mm)
–0.01 degree aiming error (1 pixel)
–can acquire video while in motion
Light field video camera
1st generation prototype
2nd generation prototype