CSCE 641: Computer Graphics Image-based Rendering Jinxiang Chai
Image-based Modeling: Challenging Scenes
Why do these scenes produce poor results?
- lack of discernible features
- occlusions
- difficult to capture high-level structure
- illumination changes
- specular surfaces
Some Solutions
- Use priors to constrain the solution space
- Aid the modeling process with minimal user interaction
- Combine image-based modeling with other modeling approaches
Videos
- Morphable face
- Image-based tree modeling
- VideoTrace
- 3D modeling from ortho-images
Spectrum of IBMR
(Figure: the spectrum runs from pure images (light field, panorama), through images + depth and geometry + images, to geometry + materials with kinematics, dynamics, etc.; inputs range from images and user input to range scans. Image-based rendering sits at the image end, image-based modeling toward the model end.)
Outline
- Layered depth image / post-rendering 3D warping
- View-dependent texture mapping
- Light field rendering [Levoy and Hanrahan, SIGGRAPH 96]
Layered depth image [Shade et al., SIGGRAPH 98]
A layered depth image stores samples of (r, g, b, d):
- an image with depths
- rays with colors and depths
Layered depth image [Shade et al., SIGGRAPH 98]
Rendering from a layered depth image:
- incremental in X and Y
- forward warping of each pixel with its depth
How to deal with the occlusion/visibility problem? Depth comparison.
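The warp-with-depth-comparison idea can be sketched in a few lines. This is a minimal, non-incremental sketch, not the incremental-in-X-and-Y traversal of the original papers; the shared intrinsics `K` and the relative pose `(R, t)` are assumed inputs.

```python
import numpy as np

def forward_warp(color, depth, K, R, t):
    """Forward-warp an RGBD image to a novel view, resolving occlusion
    by depth comparison (nearer samples win)."""
    H, W = depth.shape
    # Back-project every reference pixel to a 3D point.
    v, u = np.mgrid[0:H, 0:W]
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the novel camera and project.
    proj = K @ (R @ pts + t.reshape(3, 1))
    z = proj[2]
    ok = z > 1e-6
    x = np.rint(proj[0, ok] / z[ok]).astype(int)
    y = np.rint(proj[1, ok] / z[ok]).astype(int)
    zv = z[ok]
    cv = color.reshape(-1, 3)[ok]
    inb = (x >= 0) & (x < W) & (y >= 0) & (y < H)
    x, y, zv, cv = x[inb], y[inb], zv[inb], cv[inb]
    # Depth comparison: paint far-to-near so nearer samples overwrite.
    out = np.zeros_like(color)
    for i in np.argsort(-zv):
        out[y[i], x[i]] = cv[i]
    return out  # unfilled pixels stay black (disocclusions)
```

Warping with the identity pose reproduces the reference image; any other pose leaves black holes where the reference frame has no information.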
How to form LDIs
- Synthetic scenes with known geometry and texture: from multiple depth images, or from a modified ray tracer
- Real images: reconstruct geometry from multiple images (e.g., voxel coloring, stereo reconstruction), then form LDIs from the images and the reconstructed geometry
- Kinect sensors: record both image data and depth data
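However the samples are obtained, the container is the same: each pixel holds a depth-sorted list of (r, g, b, d) samples. The sketch below is illustrative; the class name and layout are choices made here, not the storage scheme of Shade et al.

```python
import bisect

class LayeredDepthImage:
    """Minimal LDI: per pixel, a front-to-back list of (r, g, b, d) samples."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.pixels = [[[] for _ in range(width)] for _ in range(height)]

    def insert(self, x, y, r, g, b, d):
        # Keep each pixel's samples sorted front-to-back by depth d.
        bisect.insort(self.pixels[y][x], (d, (r, g, b)))

    def front(self, x, y):
        """Nearest (r, g, b, d) sample at a pixel, or None if empty."""
        samples = self.pixels[y][x]
        if not samples:
            return None
        d, (r, g, b) = samples[0]
        return (r, g, b, d)
```

Keeping the per-pixel lists depth-sorted makes front-to-back rendering a simple in-order traversal.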
Image-based Rendering Using Kinect Sensors
- Capture both video and depth data using Kinect sensors
- Use 3D warping to render an image from a novel viewpoint (e.g., post-rendering 3D warping)
3D Warping
Render an image from a novel viewpoint by warping an RGBD image. The 3D warp can expose areas of the scene for which the reference frame has no information (shown here in black).
3D Warping
- A single warped frame will lack information about areas occluded in its reference frame.
- Multiple reference frames can be composited to produce a more complete derived frame.
How can this be extended to a surface representation?
Outline
- Layered depth image / post-rendering 3D warping
- View-dependent texture mapping
- Light field rendering
View-dependent surface representation
From multiple input images:
- reconstruct the geometry
- recover a view-dependent texture
View-dependent texture mapping [Debevec et al., 1998]
View-dependent texture mapping
(Figure: the subject's 3D proxy; cameras C0, C1, C2, C3 and virtual camera D view a vertex V, with angles θ0...θ3 between their viewing rays and D's ray.)
- Virtual camera at point D
- Textures from cameras C_i are mapped onto triangle faces
- Blending weights are computed at each vertex V
- The angle θ_i is used to compute the weight values: w_i = exp(−θ_i² / 2σ²)
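The per-vertex weight computation can be sketched as follows. The value of `sigma` and the final normalization step are choices made here for illustration, not values from the slide.

```python
import numpy as np

def vdtm_weights(vertex, cam_centers, virt_center, sigma=0.5):
    """Gaussian view-dependent blending weights at one proxy vertex:
    w_i = exp(-theta_i^2 / (2 sigma^2)), where theta_i is the angle
    between the ray vertex->C_i and the ray vertex->D."""
    def unit(v):
        return v / np.linalg.norm(v)

    d_ray = unit(virt_center - vertex)  # ray toward the virtual camera D
    thetas = np.array([
        np.arccos(np.clip(np.dot(unit(c - vertex), d_ray), -1.0, 1.0))
        for c in cam_centers
    ])
    w = np.exp(-thetas**2 / (2 * sigma**2))
    return w / w.sum()  # normalize so the blended texel is a convex combination
```

Cameras whose viewing direction is closest to the virtual camera's receive the largest weight, which is what makes the texture view-dependent.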
Can we render an image without any geometric information?
Outline
- Layered depth image / post-rendering 3D warping
- View-dependent texture mapping
- Light field rendering [Levoy and Hanrahan, SIGGRAPH 96]
Light Field Rendering
(Figure: light field capture and rendering, with rays parameterized by a camera plane and an image plane.)
Plenoptic Function
P(x, y, z, θ, φ, λ, t)
The plenoptic function can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen: it completely captures our visual reality. An image is a 2D sample of the plenoptic function.
Ray
Ignoring time and wavelength leaves a 5D function: 3D position plus 2D direction, P(x, y, z, θ, φ).
(Figure: a static object under static lighting, viewed from a moving camera: no change in radiance along a ray.) How can we use this?
Ray Reuse
In a vacuum (a non-dispersive medium), radiance is constant along an infinite line, so the 5D ray space reduces to 4D: 2D position plus 2D direction. (Slide by Rick Szeliski and Michael Cohen)
Synthesizing novel views
Assume we capture every ray in 3D space!
Light field / Lumigraph
(Figure: the scene ("stuff") lies inside a convex region; outside that region space is empty, so the 4D light field suffices.)
Light Field
- How to represent rays?
- How to capture rays?
- How to use captured rays for rendering?
Light field - Organization
Two-plane parameterization: each ray is indexed by its 2D intersection (s, t) with one plane and (u, v) with a second plane, giving 2D position plus 2D direction.
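The two-plane parameterization can be sketched as a ray-to-(u, v, s, t) mapping. The plane positions `z_uv` and `z_st` are assumptions for illustration; the slides only fix the two-plane idea, not where the planes sit.

```python
import numpy as np

def ray_to_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Map a ray to two-plane light-field coordinates (u, v, s, t) by
    intersecting it with the plane z = z_uv (giving u, v) and the
    plane z = z_st (giving s, t)."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    if abs(d[2]) < 1e-12:
        raise ValueError("ray parallel to the parameterization planes")
    u, v = (o + (z_uv - o[2]) / d[2] * d)[:2]
    s, t = (o + (z_st - o[2]) / d[2] * d)[:2]
    return u, v, s, t
```

Two intersections with parallel planes pin down both the ray's position and its direction, which is why 4 numbers suffice in free space.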
Light field - Organization
Hold (u, v) constant and let (s, t) vary: what do we get? (Figure: slices of a light field / lumigraph.)
Light field / lumigraph - Capture
Idea: move a camera carefully over the (u, v) plane with a gantry (see the light field paper).
Stanford multi-camera array
- 640 × 480 pixels × 30 fps × 128 cameras
- synchronized timing
- continuous streaming
- flexible arrangement
Light field / lumigraph - Rendering
For each output pixel, determine (s, t, u, v), then either use the closest discrete RGB sample or interpolate nearby values.
Ray interpolation
- Nearest neighbor: take the closest s and closest u and draw that sample
- Linear interpolation in s-t
- Quadrilinear interpolation: blend the 16 nearest samples
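Quadrilinear interpolation over the 16 nearest samples can be sketched as follows; the array layout `(S, T, U, V, C)` is an assumption made for this sketch.

```python
import numpy as np

def quadrilinear_sample(lf, s, t, u, v):
    """Blend the 16 nearest light-field samples with quadrilinear weights.

    lf has shape (S, T, U, V, C); (s, t, u, v) are fractional grid
    coordinates. Each corner's weight is the product of the four 1D
    linear weights, so the weights sum to 1."""
    coords = (s, t, u, v)
    base = [int(np.floor(c)) for c in coords]
    frac = [c - b for c, b in zip(coords, base)]
    out = np.zeros(lf.shape[-1])
    for corner in range(16):          # the 16 surrounding grid points
        w = 1.0
        idx = []
        for dim in range(4):
            bit = (corner >> dim) & 1
            w *= frac[dim] if bit else 1.0 - frac[dim]
            idx.append(min(base[dim] + bit, lf.shape[dim] - 1))
        out = out + w * lf[tuple(idx)]
    return out
```

Setting all four fractional parts to zero reduces this to a single lookup, i.e., the nearest-neighbor case above.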
Light fields
Advantages:
- no geometry needed
- simpler computation vs. traditional CG
- cost independent of scene complexity
- cost independent of material properties and other optical effects
Disadvantages:
- static geometry
- fixed lighting
- high storage cost