Parameterized Environment Maps


1 Parameterized Environment Maps
Ziyad Hakura, Stanford University John Snyder, Microsoft Research Jed Lengyel, Microsoft Research In this talk, I introduce parameterized environment maps, a representation for accurate, real-time rendering of shiny objects.

2 Static Environment Maps (EMs)
Generated using standard techniques: Photograph a physical sphere in an environment Render six faces of a cube from object center Traditional, or static, environment maps achieve a reasonable approximation of reflections and are easily supported in hardware. Note that we use the abbreviation EM for environment maps. They are constructed using standard techniques, such as by taking a photograph of a physical sphere in an environment, or by rendering the six faces of a cube from the object center.

3 Ray-Traced vs. Static EM
Self-reflections are missing Unfortunately, static environment maps fail to accurately reproduce local reflections. This is because the reflector is approximated as a point and the reflected environment as infinitely distant. Notice the missing self-reflections in this image generated using a static environment map.

4 Here we compare the original ray traced sequence on the left with a sequence generated using a static environment map on the right. Note the missing self-reflections of the teapot spout and knob on the body and lid.

5 Parameterized Environment Maps (PEM)
A parameterized environment map is a sequence of environment maps recorded over a set of viewpoints. In this example, we record a sequence of environment maps over a one-dimensional viewspace.

6 3-Step Process 1) Preprocess: Ray-trace images at each viewpoint
2) Preprocess: Infer environment maps (EMs) 3) Run-time: Blend between 2 nearest EMs Creating and using parameterized environment maps is a 3-step process. In the first step, we ray-trace images of the reflective object from a sequence of sample viewpoints. In the second step, we infer a separate environment map to match each ray-traced image. Note that unlike traditional environment maps, these environment maps are specific to the reflective object for which they are inferred. In the third step, we reconstruct an image from a particular viewpoint by blending between the two nearest environment maps.
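As a minimal sketch of the lookup in step 3 (ours, not the paper's code), assuming the sampled viewpoints are stored as sorted angles on the 1D view circle; the function name and interface are illustrative:

```python
import numpy as np

def nearest_two(theta, sample_angles):
    """Locate the two sampled viewpoints bracketing view angle theta,
    plus the linear blend weight toward the second one.
    sample_angles must be sorted (1 degree apart in the experiments
    described later in this talk)."""
    hi = np.searchsorted(sample_angles, theta)
    hi = int(np.clip(hi, 1, len(sample_angles) - 1))
    lo = hi - 1
    w = (theta - sample_angles[lo]) / (sample_angles[hi] - sample_angles[lo])
    return lo, hi, float(np.clip(w, 0.0, 1.0))
```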

7 Environment Map Geometry
An environment map's geometry refers to how it approximates the reflecting environment. For example, an environment can be represented by a sphere at infinity, by a finite cube, or by a finite hemisphere with a planar bottom. By picking an environment map geometry that closely matches the actual environment, we obtain better predictions of how reflections move as the view changes. At the same time, we limit ourselves to simple geometry, such as cubes, spheres and ellipsoids, for fast ray intersections. In this diagram, we show how environment map texture coordinates are generated for an individual vertex. This is done by intersecting the reflection ray with the simple geometry that approximates the environment, in this case an ellipsoid. We then map this intersection point using the environment texture mapping to find its (u,v) texture coordinates.
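A rough sketch of this per-vertex computation, not the paper's code: the ellipsoid parameters and the final lat-long (u,v) chart are illustrative assumptions (the paper itself uses a cubic mapping, described later).

```python
import numpy as np

def reflect(d, n):
    """Reflect direction d about unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def intersect_ellipsoid(origin, direction, center, radii):
    """Far intersection of a ray with an axis-aligned ellipsoid.
    Scale into unit-sphere space, solve the quadratic, return the
    world-space hit point (None if the ray misses)."""
    o = (origin - center) / radii
    d = direction / radii
    a = np.dot(d, d)
    b = 2.0 * np.dot(o, d)
    c = np.dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    # The vertex sits inside the bounding ellipsoid, so take the far root.
    t = (-b + np.sqrt(disc)) / (2.0 * a)
    return origin + t * direction

def em_texcoords(eye, vertex, normal, center, radii):
    """Generate (u, v) for one vertex: bounce the view ray off the
    surface, hit the simple EM geometry, map the hit direction to a
    lat-long chart (assumed here for brevity)."""
    view = vertex - eye
    view /= np.linalg.norm(view)
    r = reflect(view, normal)
    hit = intersect_ellipsoid(vertex, r, center, radii)
    if hit is None:
        hit = vertex + r            # fall back to the raw ray direction
    d = hit - center
    d /= np.linalg.norm(d)
    u = 0.5 + np.arctan2(d[2], d[0]) / (2.0 * np.pi)
    v = 0.5 - np.arcsin(d[1]) / np.pi
    return u, v
```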

8 Why Parameterized Environment Maps?
Captures view-dependent shading in environment Accounts for geometric error due to approximation of environment with simple geometry There are two benefits to parameterizing environment maps. First, we can capture view-dependent shading on specular objects in the environment. For example, if the teapot reflects an image of a cup, and this cup is itself reflective, then we can capture the view-dependent shading on the cup using multiple environment maps. Second, we can account for geometric error caused by approximating the environment with simpler geometry. We compute environment maps by inferring them to match a ray-traced image rather than by rendering from the object center. Doing this at each viewpoint, we obtain a good match that compensates for the geometric error.

9 How to Parameterize the Space?
Experimental setup 1D view space 1˚ separation between views 100 sampled viewpoints In general, author specifies parameters Space can be 1D, 2D or more Viewpoint, light changes, object motions In our experiments, we used a one-dimensional view space containing a total of 100 sampled viewpoints separated by 1 degree. In general, the content author specifies how to parameterize the space. The space can have more than one dimension, for example two view dimensions instead of just one. Furthermore, the space can be parameterized by parameters other than viewpoint, such as light changes and object motions.

10 Ray-Traced vs. PEM
Closely match local reflections like self-reflections Using parameterized environment maps, we are able to closely match local reflections, like the self-reflections evident in each ray-traced image.

11 Here we compare the original ray traced sequence on the left with the approximating PEM sequence on the right. Notice the self-reflections of the teapot spout and knob on the body and lid.

12 Movement Away from Viewpoint Samples
Ray-Traced PEM Furthermore, we can move the viewpoint away from the ray-traced samples, plausibly maintaining these local reflection effects. [Pause]

13 We show the effectiveness of PEMs in moving off the manifold.
Here, we interpolate between sampled viewpoints along the 1-D circle of viewpoints. Here, we move above and below the plane of sample viewpoints. We move closer to and farther from the reflecting object. Finally, we take a path that combines horizontal and vertical motion.

14 Previous Work Reflections on Planar Surfaces [Diefenbach96]
Reflections on Curved Surfaces [Ofek98] Image-Based Rendering Methods Light Field, Lumigraph, Surface Light Field, LDIs Decoupling of Geometry and Illumination Cabral99, Heidrich99 Parameterized Texture Maps [Hakura00] There has been much previous work on efficient techniques to produce realistic reflections, listed here. [Pause] I will describe in detail the differences with surface light fields and parameterized texture maps to make clear what our contribution is.

15 Surface Light Fields [Miller98,Wood00]
PEM Dense sampling over surface points of low-resolution lumispheres Sparse sampling over viewpoints of high-resolution EMs Surface light fields represent the radiance field as a dense sampling over surface points of low-resolution lumispheres. Our approach instead shares a single high-resolution environment map over all surface points, parameterized over a sparse sampling of views. Because lumispheres have 2 to 3 orders of magnitude fewer samples than typical environment maps, it is not surprising that current results for surface light fields demonstrate only blurry highlights. As I have just shown, we achieve mirror-like reflections with our approach. Furthermore, surface light fields require an irregular scattering of samples over the entire 4D space to reconstruct an image. Our approach accesses the single environment map appropriate for a given view, or perhaps a few closest ones, thus achieving much more spatial coherence.

16 Parameterized Texture Maps [Hakura00]
Light View Captures realistic pre-rendered shading effects Last year, we introduced parameterized texture maps that record a texture map inferred from offline ray-traced imagery over a set of parameters like viewpoint, light changes, and object motions. This example illustrates a 2D space representing a 1D viewspace combined with a 1D light swinging motion to produce realistic textures for a glass goblet.

17 Comparison with Parameterized Texture Maps
Parameterized Texture Maps [Hakura00] Static texture coordinates Pasted-on look away from sampled views Parameterized Environment Maps Bounce rays off, intersect simple geometry Layered maps for local and distant environment Better quality away from sampled views Because parameterized texture maps use static texture coordinates, we get a pasted-on look when we move away from sampled views. In contrast, with parameterized environment maps, we compute texture coordinates on-the-fly by intersecting the rays that bounce off the surface with simple geometry that approximates the environment. In addition, we distinguish between local and distant elements in the environment and store them in separate layers to get parallax. Consequently, we achieve better quality away from the sampled views.

18 Compare PEMs on the left with PTMs on the right over the same off-the-manifold viewpoint trajectory.
Notice the popping artifact and pasted-on look for parameterized texture maps.

19 EM Representations EM Geometry EM Mapping
How reflected environment is approximated Examples: Sphere at infinity Finite cubes, spheres, and ellipsoids EM Mapping How geometry is represented in a 2D map Gazing ball (OpenGL) mapping Cubic mapping It is important to distinguish the geometry of an environment map from its mapping. As I mentioned earlier, an environment map's geometry refers to how it approximates the reflecting environment. An environment map's mapping, on the other hand, refers to how the geometry is represented in a 2D map. Examples include the gazing-ball and cubic mappings. We use the cubic mapping, which avoids any singularities that may become visible from non-sampled views.
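As a sketch of a cubic mapping, one common convention picks a face by the direction's dominant axis and normalizes the remaining two components to (u,v). Exact face orientations differ between APIs, so treat the signs below (patterned after the usual OpenGL layout) as an assumption:

```python
def cube_map_uv(d):
    """Map a 3D direction to (face, u, v) with u, v in [0, 1]."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:            # +X or -X face dominates
        face = '+x' if x > 0 else '-x'
        u, v, m = (-z if x > 0 else z), -y, ax
    elif ay >= az:                       # +Y or -Y face dominates
        face = '+y' if y > 0 else '-y'
        u, v, m = x, (z if y > 0 else -z), ay
    else:                                # +Z or -Z face dominates
        face = '+z' if z > 0 else '-z'
        u, v, m = (x if z > 0 else -x), -y, az
    # Normalize from [-m, m] to [0, 1] on the chosen face.
    return face, 0.5 * (u / m + 1.0), 0.5 * (v / m + 1.0)
```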

20 Layered EMs Segment environment into local and distant maps
Allows different EM geometries in each layer Supports parallax between layers We segment the environment into separate maps for local and distant elements. The separation allows different EM geometries to be used to better approximate each layer, and supports parallax between layers. In our work, we use a tightly bounding ellipsoid to represent self-reflections. The more distant environment is represented as a cube.

21 Segmented, Ray-Traced Images
Distant Local Color Local Alpha Fresnel EMs are inferred for each layer separately The ray tracer segments images into local and distant layers. Environment maps are inferred for each layer separately. I will now discuss each of the layers in more detail.

22 Distant Layer Ray directly reaches distant environment
The distant layer is constructed in the ray-tracer from rays that immediately reach the distant environment, as shown in the diagram on the right.

23 Distant Layer Ray bounces more times off reflector
However, it is possible for rays that bounce off the object to hit the object again after their first bounce. In this example, a ray hits the teapot 2 more times after the first bounce, by reflecting off the spout and the teapot body.

24 Distant Layer Ray propagated through reflector
For the distant layer, we ignore secondary bounces off the reflective object by allowing rays to pass through the object after their first bounce. In this case, an incoming ray bounces off the body and ignores the presence of the spout to reach the distant environment.

25 Local Layer Local Color Local Alpha
The local layer is constructed from rays that bounce off the reflective object more than once before reaching the distant environment. The local layer’s image is a 4-channel image with transparency, because not all rays hit the teapot more than once. This transparency is encoded in the resulting environment map, so that we can see through the local layer to the distant geometry where the local geometry is absent.

26 Fresnel Layer Fresnel modulation is generated at run-time
The ray-tracer keeps the highly view-dependent Fresnel modulation separate from the incoming radiance. We generate the Fresnel modulation at run-time using a simple formula.
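The talk does not spell the formula out; a common "simple formula" for this purpose is Schlick's approximation, shown here as an assumed stand-in rather than the paper's exact polynomial:

```python
def fresnel_schlick(cos_theta, f0):
    """Schlick's approximation: reflectance rises from f0 at normal
    incidence toward 1 at grazing angles. cos_theta = dot(N, V)."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

As described in the run-time slide later, a polynomial like this can be baked into a small 1D texture indexed by N.V so the hardware interpolates it accurately per pixel.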

27 EM Inference
A x = b: A holds the hardware filter coefficients, x the unknown EM texels, b the ray-traced image on the screen. Our inference approach is based on the observation that a texel contributes to zero or more display pixels. Neglecting quantization effects, a texel that is twice as bright contributes twice as much, so we can model the hardware as a linear system Ax = b, where the matrix A represents the hardware filter coefficients mapping texels to display pixels, the vector x represents the environment map to be solved for, and the vector b represents the ray-traced image to be matched. We determine the elements of the matrix A by performing test renderings on the hardware that isolate the contribution of each texel. In this simplified diagram, one texel in an environment map is set to one and all others are set to zero. On the right is shown the corresponding impulse response on the screen when the object is rendered with this environment map. Since the matrix A is sparse, we can solve for x using conjugate gradient.
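A minimal sketch of the solve, using SciPy. Plain conjugate gradient needs a symmetric positive (semi-)definite system, so this sketch assumes the normal equations A^T A x = A^T b; the paper's exact least-squares formulation may differ:

```python
import numpy as np
from scipy.sparse.linalg import cg

def infer_environment_map(A, b):
    """Least-squares solve of A x = b for the EM texels x.

    A: sparse (num_pixels x num_texels) filter-coefficient matrix
       measured from the hardware; b: flattened ray-traced image.
    """
    AtA = (A.T @ A).tocsr()   # A is sparse, so A^T A stays cheap to form
    Atb = A.T @ b
    x, info = cg(AtA, Atb, maxiter=500)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return np.clip(x, 0.0, 1.0)   # keep texels in displayable range
```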

28 Inferred EMs per Viewpoint
Distant Local Color Alpha Here we show environment maps inferred for one viewpoint. These environment maps consist of 6 mip-mapped images since we use a cubic mapping. Though we have inferred parameterized environment maps for the distant layer, we can actually get away with using a static non-parameterized environment map for this layer in this example, since the distant environment happens to be diffuse and far from the teapot.

29 Run-Time “Over” blending mode to composite local/distant layers
Fresnel modulation, F, generated on-the-fly per vertex Blend between neighboring viewpoint EMs Teapot object requires 5 texture map accesses: 2 EMs (local/distant layers) at each of 2 viewpoints (for smooth interpolation) and 1 1D Fresnel map (for better polynomial interpolation) At run-time, we use the “over” blending mode to composite the local layer, subscript L, over the distant, subscript D, before modulating by the fresnel term, F. We use multi-pass rendering to assemble the shading layers. A purely reflective object, such as the teapot, in a 1D viewspace requires 5 texture map accesses, 2 EMs for the local/distant dual at each of 2 viewpoints for smooth interpolation, and 1 access to a 1D map for better interpolation of the high-degree polynomial involved in the fresnel term.

30 Video Results Experimental setup 1D view space
1˚ separation between views 100 sampled viewpoints I will now show video results for this technique.

31 Layered PEM vs. Infinite Sphere PEM
We first compare layered parameterized environment maps with a simpler version that approximates the environment with a single sphere at infinity.

32 Notice the wobble in the reflection of the knob on the lid.
This is due to the approximation of the local environment as infinitely distant in the infinite-sphere PEM.

33 Real-time Demo We now show a sequence captured in real-time from our viewer.

34 Our viewer achieves an average of 20 frames per second with blending between adjacent viewpoints on a 700 MHz PC with Nvidia GeForce graphics accelerator.

35 Summary
Parameterized Environment Maps: Layered; Parameterized by viewpoint; Inferred to match ray-traced imagery. Accounts for the environment's: Geometry; View-dependent shading. Mirror-like, local reflections. Hardware-accelerated display. In summary, we have introduced parameterized environment maps. This representation is layered, parameterized by viewpoint, and inferred to match ray-traced images. The advantage of this representation is that it accounts for the environment's geometry and view-dependent shading. It is well suited for rendering mirror-like local reflections. Furthermore, this representation is easily supported in present-day graphics hardware, and has the desirable property of requiring only spatially coherent memory accesses.

36 Future Work Placement/partitioning of multiple environment shells
Automatic selection of EM geometry Incomplete imaging of environment “off the manifold” Refractive objects Glossy surfaces In future work, we are interested in further exploring the use of multiple environment map shells to better approximate the environment, including finding the optimal partitioning and placement of such shells. We also seek automatic selection of environment map geometry. Another area of future work is handling disocclusions where the reflector fails to image parts of its environment that are revealed in nearby views. This can happen if parts of the reflector are occluded or if its normals incompletely cover the sphere. Finally, we would like to apply these methods to refractive objects and glossy surfaces.

37 Questions

38 Timing Results

                   On the Manifold   Off the Manifold
#geometry passes   2                 3
texgen time        35 ms             35 ms
frame time         45 ms             57 ms
FPS                22                17.5

39 Texel Impulse Response
Hardware Render Screen Texture To measure the hardware impulse response, we render with a single texel set to 1. We use the hardware itself to characterize the mapping of the textured object through the 3D hardware. To capture the impulse response of a single texel, we set its value to 1 and render using the same graphics operations that the decoder uses. In this diagram, the size of the solid circle represents the intensity of the pixel value.

40 Single Texel Response Hardware rendering with a 1 in the texture map produces a set of filter coefficients for the given texel.

41 Model for Single Texel one column per texel one row per screen pixel
Reading the filter coefficients back from the screen creates a single column in the A matrix.
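A sketch of building A column by column. Here render(texture) is a hypothetical stand-in for the actual hardware path: it must rasterize the reflective object with the given EM texture and return the flattened framebuffer:

```python
import numpy as np
from scipy.sparse import coo_matrix

def measure_filter_matrix(render, num_texels, num_pixels):
    """Assemble the sparse filter-coefficient matrix A, one column per
    texel and one row per screen pixel, from texel impulse responses."""
    rows, cols, vals = [], [], []
    tex = np.zeros(num_texels)
    for j in range(num_texels):
        tex[j] = 1.0                 # single-texel impulse
        response = render(tex)       # impulse response on the screen
        nz = np.nonzero(response)[0]
        rows.extend(nz.tolist())
        cols.extend([j] * len(nz))
        vals.extend(response[nz].tolist())
        tex[j] = 0.0
    return coo_matrix((vals, (rows, cols)),
                      shape=(num_pixels, num_texels)).tocsr()
```

In practice one would batch many well-separated impulses into each test rendering rather than issuing one rendering per texel; the loop above is kept naive for clarity.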

42 Model for MIPMAPs

43 Conclusion
PEMs provide: faithful approximation to ray-traced images at pre-rendered viewpoint samples; plausible movement away from those samples using real-time graphics hardware. Parameterized environment maps provide a faithful approximation to ray-traced images at pre-rendered viewpoint samples. In addition, we have demonstrated plausible movement away from those samples using real-time graphics hardware.

44 PEM vs. Static EM We compare PEMs on the left with a static environment map on the right.

45 PEMs achieve a more realistic result.

