1
Efficient Global Illumination for Dynamic Scenes Takehiro Tawara MPI Informatik, Saarbruecken, Germany
2
Problem Statement ●In traditional rendering algorithms, every frame is considered separately. ●Temporal coherence is poorly exploited, which leads to redundant computations. ●The visual sensitivity to temporal detail cannot be properly accounted for, which leads to overly conservative stopping conditions and temporal aliasing.
3
Outline ●Related Work ●Background ●Temporally-coherent Rendering Techniques: –Static Scenes (Walkthroughs) ●Ray Tracing & IBR –Dynamic Scenes ●Mesh-based Density Estimation ●Photon Mapping, Final Gathering & Irradiance Cache –Efficient Handling of Strong Secondary Lighting ●Conclusions
4
Related Work ●Progressive radiosity: –Chen '90, George et al. '90 ●Hierarchical radiosity: –Pueyo et al. '97, Drettakis and Sillion '97, Schöffel and Pomi '99 ●Instant radiosity: –Keller '97 ●Space-time hierarchical radiosity: –Damez '99, Martin et al. '03 ●Global Monte Carlo radiosity: –Besuievsky and Sbert '01 ●Range-image framework: –Nimeroff et al. '96 ●Density Estimation: –Dmitriev et al. '02 ●Bi-Directional Path Tracing: –Havran et al. '03 ●Perception-based / RADIANCE: –Yee and Pattanaik '01 ●Stochastic Ray Tracing: –Meyer and Anderson '06
5
Background ●3D Warping and Pixel Flow ●Density Estimation Particle Tracing ●Photon Mapping ●Animation Quality Metric (AQM)
6
Exploiting Temporal Coherence in Walkthrough Rendering
7
Overview ●Animation rendering solution: a hybrid of standard ray tracing and Image-Based Rendering (IBR) techniques. –Use ray tracing to compute all key frames and selected glossy and transparent objects. –For in-between frames, derive as many pixels as possible using computationally inexpensive IBR techniques. ●Animation quality enhancement: spatio-temporal antialiasing solution.
8
Selected case study scenes ●Interesting occlusion relationships between objects, which are challenging for IBR. ●Many specular objects in the atrium scene. ●Animation path causing large variations of the pixel flow in the room scene.
9
In-between frame generation (see the sketch below)
MakeInbetweenFrames(k_0, k_2N):
  k_N' = 3DWarp(k_0); k_N'' = 3DWarp(k_2N)
  Mask out pixels: low pixel flow, bad specular, IBR occlusions
  if AQM(k_N', k_N'') > t:
    MakeInbetweenFrames(k_0, k_N)
    MakeInbetweenFrames(k_N, k_2N)
  else:
    for each frame k_1 ... k_2N-1: Composite(k_0, k_2N)
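A minimal sketch of this recursion, with Frame and all helper functions as hypothetical stand-ins (stubs), not the original implementation:

// Sketch only: Frame and the helper functions are hypothetical stand-ins.
struct Frame { int index; };

Frame warp3D(const Frame&, int target) { return Frame{target}; }   // IBR 3D warp (stub)
Frame rayTraceKeyFrame(int target)     { return Frame{target}; }   // full render of a new key (stub)
double aqmDifference(const Frame&, const Frame&) { return 0.0; }   // AQM-predicted difference (stub)
void composite(const Frame&, const Frame&, int, int) {}            // derive in-between frames (stub)

// Recursively fill the frames between two keys: warp both keys to the middle
// frame, and if the AQM predicts a visible difference between the two warped
// images, ray trace a new key there and recurse; otherwise composite all
// in-between frames from the two keys.
void makeInbetweenFrames(const Frame& k0, const Frame& k2N, double t) {
    if (k2N.index - k0.index < 2) return;              // no in-between frames left
    int mid = (k0.index + k2N.index) / 2;
    Frame a = warp3D(k0, mid), b = warp3D(k2N, mid);
    if (aqmDifference(a, b) > t) {                      // keys too far apart
        Frame kMid = rayTraceKeyFrame(mid);
        makeInbetweenFrames(k0, kMid, t);
        makeInbetweenFrames(kMid, k2N, t);
    } else {                                            // masking of bad pixels omitted here
        composite(k0, k2N, k0.index + 1, k2N.index - 1);
    }
}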
10
IBR-derived pixels to be ray traced ●Pixels representing specular objects, selected for recomputation by the AQM predictions. ●Pixels with occlusion problems inherent to IBR techniques. ●Pixels for slowly moving visual patterns, selected based on the Pixel Flow magnitude; the threshold velocity was found experimentally using subjective and objective (AQM) judgement of the resulting animation quality. ●In total, less than half of the pixels are computed by ray tracing (Atrium: 49.5%, Room: 35.1%).
11
Spatio-temporal antialiasing ●3D low-pass filtering in the spatio-temporal domain is performed as a post-process on the complete animation sequence. ●Motion-compensated filtering is performed in the temporal domain (another application of the Pixel Flow derived as a by-product of the IBR computations). ●In our experience, for moving visual patterns a single ray-traced sample per pixel is enough to produce an animation that is visually indistinguishable from its counterpart based on supersampled images.
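To make the motion-compensated temporal filtering concrete, here is a minimal sketch that averages a pixel with its counterparts in the following frames by walking along the per-pixel flow vectors; Image, Vec2 and flowAt are illustrative stand-ins, not the original code.

#include <vector>

struct Vec2 { float x, y; };
struct Image {
    int w, h;
    std::vector<float> pixels;                                   // grayscale for brevity
    float at(int x, int y) const { return pixels[y * w + x]; }
};

// Stub: per-pixel flow toward the next frame (a by-product of the IBR warping).
Vec2 flowAt(const Image&, int, int) { return Vec2{0.0f, 0.0f}; }

// Average frame f's pixel (x, y) with its motion-compensated counterparts in
// up to `radius` following frames (a 1D low-pass filter along the pixel flow).
float temporalFilter(const std::vector<Image>& frames, int f, int x, int y, int radius) {
    float sum = 0.0f; int n = 0;
    float px = (float)x, py = (float)y;
    for (int df = 0; df <= radius && f + df < (int)frames.size(); ++df) {
        const Image& img = frames[f + df];
        int ix = (int)px, iy = (int)py;
        if (ix < 0 || iy < 0 || ix >= img.w || iy >= img.h) break;   // left the image
        sum += img.at(ix, iy); ++n;
        Vec2 v = flowAt(img, ix, iy);                                // follow the motion
        px += v.x; py += v.y;
    }
    return n > 0 ? sum / n : 0.0f;
}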
12
Examples of final frames: (left) adaptively supersampled frame as used in traditional animations; (right) corresponding frame derived using our approach. In both cases the perceived quality of the animation appears similar. Speedup: x8.3.
13
Perception-Guided Global Illumination Solution for Animation Rendering
14
Focus ●Indirect lighting in animated sequences –Quite costly to compute –Usually changes slowly and smoothly both in the temporal and spatial domains
15
Temporal photon processing: contradictory requirements ●Maximize the number of photons collected in the temporal domain to reduce the stochastic noise. ●Minimize the time interval in which the photons were traced to avoid collecting invalid photons. (Illustration: static object vs. moving object.)
16
Temporal photon processing: our solution ●Energy-based stochastic error metric: –Decides the number of frames over which photons are collected in the temporal domain, –Computed for each mesh element and for all frames, –We assume that photon hits on a mesh element can be modeled by the Poisson distribution. ●Perception-based animation quality metric: –Decides the number of photons per frame, –Computed once per animation segment.
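As a hedged illustration of why the Poisson assumption yields a usable stochastic error metric (the exact formula used in the method may differ): if the number of photons $N$ collected on a mesh element over the temporal window is Poisson distributed, its standard deviation is $\sqrt{N}$, so the relative stochastic error behaves like

$$\varepsilon \approx \frac{\sqrt{N}}{N} = \frac{1}{\sqrt{N}},$$

and the collection window can be lengthened (more frames, hence more photons) until $\varepsilon$ falls below a prescribed threshold, provided the per-frame photon counts remain consistent with a single Poisson rate, i.e. the indirect lighting on that element is temporally stable.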
17
Algorithm 1.Initialization: determine the initial number of photons per frame. 2.Adjust the animation segment length depending on temporal variations of indirect lighting which are measured using energy-based criteria. 3.Adjust the number of photons per frame based on the AQM response to limit the perceivable noise. 4.Spatio-temporal reconstruction of indirect lighting. 5.Spatial filtering step.
18
Temporal processing: off (25,000 photons/frame) vs. on (10,000-40,000 photons/frame).
19
Timings [seconds] Timings of the indirect lighting computation for a single frame obtained as the average cost per frame for the whole animation (800 MHz Pentium III processor).
20
Localizing the Final Gathering for Dynamic Scenes using the Photon Map
21
Indirect Illumination L_i ●We separate the computation of L_i as a function of dynamic changes in lighting: –Rapidly changing indirect illumination L_y: ●Computed for scene regions strongly affected by dynamic objects, ●Exact computation repeated for each frame. –Slowly changing indirect illumination L_t: ●Computed for the remaining scene regions, ●Information is reused over an animation segment, with a more relaxed per-frame update of the dynamic lighting component.
22
Photon Maps ●We store photons in: –Static photon map: ●Estimates illumination when dynamic objects are removed from the scene, ●Computed once per animation segment. –Global photon map (commonly used): ●Estimates illumination for the complete scene, ●Computed for each frame. –Dynamic photon map: ●Estimates the indirect illumination contributed only by dynamic objects, ●Computed for each frame.
23
Tracing Dynamic Photons ●The dynamic photon map is built simultaneously with the global photon map: –It stores only the so-called dynamic photons, which intersect a dynamic object at least once. –The photon hit points are stored for diffuse surfaces only. –Photons with negative energy are possible in the regions occluded by dynamic objects. (Illustration: dynamic objects.)
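A minimal sketch of how photons can be flagged as dynamic during tracing; all types and helpers here are hypothetical stand-ins, and the negative-energy photons mentioned above are only indicated by a comment.

#include <vector>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 pos; Vec3 power; };
struct Hit { Vec3 pos; bool onDynamicObject; bool diffuse; bool absorbed; };

// Stub: next photon/surface interaction along the path.
Hit trace(const Vec3& origin, const Vec3&) { return Hit{origin, false, true, true}; }

// Trace one photon path; every diffuse hit goes into the global map, and hits
// on paths that touched a dynamic object at least once also go into the
// dynamic map. (Negative-energy photons for regions occluded by dynamic
// objects are not shown.)
void tracePhotonPath(Vec3 origin, Vec3 dir, Vec3 power,
                     std::vector<Photon>& globalMap,
                     std::vector<Photon>& dynamicMap) {
    bool touchedDynamic = false;
    for (int bounce = 0; bounce < 8; ++bounce) {
        Hit h = trace(origin, dir);
        touchedDynamic |= h.onDynamicObject;        // the whole remaining path is "dynamic"
        if (h.diffuse) {
            globalMap.push_back(Photon{h.pos, power});
            if (touchedDynamic)
                dynamicMap.push_back(Photon{h.pos, power});
        }
        if (h.absorbed) break;
        origin = h.pos;                             // new bounce direction omitted for brevity
    }
}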
24
Static and Dynamic Irradiance Caches ●The static irradiance cache is based on the static photon map: –Computed only once for an animation segment, –Cache positions are the same for all frames, –Updated for each frame using the dynamic photon map, –Used to compute L_t. ●The dynamic irradiance cache is computed using the global photon map for the current frame: –Recomputed for each frame for selected regions, –Used to compute L_y. (Illustration: static irradiance cache vs. dynamic irradiance cache.)
25
Determining the L_y and L_t Scene Regions ●The influence I of dynamic objects is computed using the dynamic photon map. ●The indirect illumination L_i is then composed from L_y and L_t according to I.
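The formulas on the original slide are embedded as images and are not reproduced in this transcript. A plausible form, stated purely as an assumption consistent with the surrounding definitions, is

$$I(x) = \frac{E_{dyn}(x)}{E_{total}(x)}, \qquad
L_i(x) = \begin{cases} L_y(x) & \text{if } I(x) > \tau, \\ L_t(x) & \text{otherwise,} \end{cases}$$

where $E_{dyn}$ is the energy estimated from the dynamic photon map, $E_{total}$ the energy estimated from the global photon map, and $\tau$ a threshold separating the rapidly and slowly changing regions.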
26
(Figure: influence I; slowly changing indirect illumination L_t; rapidly changing indirect illumination L_y; full global illumination L_r.)
27
Animation (video: rapidly changing indirect illumination L_y vs. full global illumination L_r).
28
Results ●Our method: –Recomputes 3-4 times fewer irradiance samples per frame, –Speeds up the computation by a factor of 1.4-3.2 with respect to the frame-by-frame approach, –Improves the overall animation quality by reducing the flickering of the reconstructed indirect lighting.
29
Exploiting Temporal Coherence in Final Gathering for Dynamic Scenes
30
Motivation ●Final gathering is necessary to render a high-quality global illumination animation. ●For a dynamic environment, final gathering is repeated from scratch for every frame: –Long computation times, –Stochastic noise that is easily perceived in an animation. ●To solve both problems, we exploit temporal coherence: –We store incoming radiance samples and share them among neighboring animation frames.
31
Cache Data Structure ●At each cache location 200-1,000 directions are sampled. ●For each direction, the incoming radiance, the distance to the nearest intersection point and a flag are stored (8 bytes in total). ●Cache locations are kept in memory in a kd-tree structure, and the sampled incoming data is stored on a hard disk.
struct IncomingRadianceSample { RGBE Li; float16 Di; ushort flag; };
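A compilable sketch of such an 8-byte record; the field layout follows the slide, while the concrete stand-in types (RGBE packed into 4 bytes, the distance as a 16-bit half-float bit pattern) are assumptions.

#include <cstdint>

// Sketch only: the original RGBE/float16 types are replaced by plain
// fixed-width stand-ins so the 8-byte layout is explicit.
#pragma pack(push, 1)
struct IncomingRadianceSample {
    uint8_t  rgbe[4];   // shared-exponent RGBE-encoded incoming radiance Li
    uint16_t di;        // half-float bit pattern: distance to the nearest hit Di
    uint16_t flag;      // bookkeeping flag (e.g. age / hit on a dynamic object)
};
#pragma pack(pop)

static_assert(sizeof(IncomingRadianceSample) == 8,
              "one incoming-radiance sample must occupy exactly 8 bytes");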
32
Temporally Coherent Gathering: Random Permutation with Non-uniform Probabilities ●A random integer X in [0, T) is mapped to the corresponding cell. ●After the selected cell (the shaded area) is removed, a new CDF (the bold dashed line) is rebuilt. The figure illustrates our temporally coherent gathering algorithm for three frames: the grid depicts 16 stratified sampling directions in the upper hemisphere over a cache location, and the lower row shows the corresponding cumulative distribution function (CDF) used to select a sampling direction.
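A minimal sketch of the selection step: a cell is drawn with probability proportional to its weight via the CDF, then removed so the CDF is rebuilt for the next draw. The names and the use of a continuous random variable instead of the integer X are illustrative simplifications.

#include <cstdlib>
#include <numeric>
#include <vector>

// Sketch only: draws cells (sampling directions) one by one with probability
// proportional to their weights and removes each drawn cell - a random
// permutation with non-uniform probabilities.
int drawCell(std::vector<double>& weights) {
    std::vector<double> cdf(weights.size());
    std::partial_sum(weights.begin(), weights.end(), cdf.begin());   // build the CDF
    double total = cdf.empty() ? 0.0 : cdf.back();
    if (total <= 0.0) return -1;                                     // nothing left to draw
    double x = total * (std::rand() / (RAND_MAX + 1.0));             // uniform in [0, total)
    std::size_t cell = 0;
    while (cdf[cell] <= x) ++cell;                                   // map x to its cell
    weights[cell] = 0.0;                   // remove the cell; the CDF is rebuilt on the next call
    return (int)cell;
}

// Possible usage: refresh k directions of a cache per frame, e.g. weighting
// each stored direction by its age so stale samples are replaced first.
// std::vector<double> w = directionAges(cache);   // hypothetical helper
// for (int i = 0; i < k; ++i) { int dir = drawCell(w); /* re-shoot gathering ray dir */ }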
33
The Number of Refreshing Rays ●Fixed number (e.g. 10% of the gathering rays): –Statistically, every cell should be refreshed after about 10 frames, –About 10 times faster computation. ●Adaptive number, based on the number of gathering rays hitting dynamic objects. (Figure: reference vs. fixed vs. adaptive.)
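One plausible way to combine the two strategies, given only as an assumption about how the adaptive count could be chosen:

// Sketch only: refresh at least a fixed fraction of the gathering rays, and
// additionally every stored sample whose gathering ray hit a dynamic object.
int refreshCount(int totalRays, int raysHittingDynamicObjects, double minFraction = 0.1) {
    int base = (int)(minFraction * totalRays);
    return base > raysHittingDynamicObjects ? base : raysHittingDynamicObjects;
}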
34
Cache Locations ●New caches are inserted automatically where needed. ●Redundancy is examined by a nearest neighbor search. (Figure: with vs. without removing the redundant caches.)
35
Moving a Light Source (video: indirect illumination only vs. full solution). Speedup: x5.2 for indirect illumination.
36
(Video: frame-by-frame computation vs. our method.) Speedup: x9.1 for indirect illumination.
37
Distribution of Incoming Radiance Samples over the Hemisphere: a) frame-by-frame computation; b) 10% of the samples is refreshed for each frame according to the aging criterion. (Plot: correspondence of irradiance between a) and b).)
38
Efficient Rendering of Strong Secondary Lighting in Photon Mapping Algorithm
39
Noise Reduction Techniques ●Variance reduction techniques: –Stratified sampling –Importance sampling –Separation of the integrand ●Importance sampling based on a BRDF: –Easy for glossy surfaces –Difficult for diffuse surfaces
40
Overview ●Global grid structure ●Split a global photon map into: –Low-energy photon map ●Stratified sampling –High-energy photon map ●Explicit sampling toward bright regions
41
Algorithm ●Global grid, in which each voxel has a counter holding the number of photons that hit a surface inside the voxel. ●During the photon tracing phase: –If (counter <= c_max), the photon is stored in the low-energy photon map. –If (counter > c_max), the photon is stored in the high-energy photon map.
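A minimal sketch of that classification step, assuming a uniform global grid with integer per-voxel counters; all type names are placeholders, not the original code.

#include <vector>

// Sketch only: classify each stored photon into the low- or high-energy map
// depending on how many photons have already landed in its grid voxel.
struct Vec3 { float x, y, z; };
struct Photon { Vec3 pos; Vec3 power; };

struct PhotonSplitter {
    int nx, ny, nz, cMax;
    Vec3 gridMin, cellSize;
    std::vector<int> counter;                       // photons seen per voxel so far
    std::vector<Photon> lowEnergyMap, highEnergyMap;

    PhotonSplitter(int nx, int ny, int nz, int cMax, Vec3 gmin, Vec3 cs)
        : nx(nx), ny(ny), nz(nz), cMax(cMax), gridMin(gmin), cellSize(cs),
          counter(nx * ny * nz, 0) {}

    int voxelIndex(const Vec3& p) const {
        int ix = (int)((p.x - gridMin.x) / cellSize.x);
        int iy = (int)((p.y - gridMin.y) / cellSize.y);
        int iz = (int)((p.z - gridMin.z) / cellSize.z);
        return (iz * ny + iy) * nx + ix;            // bounds clamping omitted for brevity
    }

    void store(const Photon& ph) {
        int v = voxelIndex(ph.pos);
        if (++counter[v] <= cMax) lowEnergyMap.push_back(ph);   // sparsely hit region
        else                      highEnergyMap.push_back(ph);  // bright, densely hit region
    }
};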
42
High-Energy Photon Map ●Distribution of photon hit points in the high-energy photon map. ●The black dots in the upper-left region around the primary light source represent photons from this map.
43
Reflected Radiance L_h ●L_h: reflected radiance for the high-energy photon map ●M: set of brighter voxels ●f: BRDF ●V: visibility function (1: visible, 0: otherwise) ●dE_h: differential irradiance from a voxel
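The slide's equation is not present in this transcript; given the symbol definitions above, a plausible reconstruction (an assumption, not a verified copy of the original formula) is

$$L_h(x, \omega_o) \approx \sum_{v \in M} f(x, \omega_v, \omega_o)\, V(x, v)\, dE_h(v),$$

where the sum runs over the set M of brighter voxels, $\omega_v$ is the direction from $x$ toward voxel $v$, and the visibility $V$ is evaluated by an explicit shadow ray toward the voxel (hence "explicit sampling toward bright regions").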
44
Scene 1 ●Size: 320 x 240 pixels ●a) 768 stratified samples/pixel; rendering time: 21 min. ●b) 278 samples/pixel (48 stratified + 230 explicit samples); rendering time: 9 min.
45
Scene 2 ●Size: 1,128 x 480 pixels ●398 samples / cache –300 stratified samples –98 explicit samples ●13,666 caches ●Rendering time: 10 min.
46
Conclusions ●We presented a number of global illumination algorithms that exploit the temporal coherence of the lighting distribution between subsequent frames to improve the computation performance and the overall animation quality. ●Our strategy relied on extending into the temporal domain global illumination and rendering techniques such as density estimation particle tracing, photon mapping, ray tracing, and irradiance caching, which were originally designed to handle static scenes only. ●Our solutions led to significant improvements in computation performance and animation quality through the suppression of temporal aliasing.
47
Summary
48
Appendix
49
Results: Statistics and Timings
(Chart: average computation time / frame.)
Percentage of pixels to ray trace:
                  Atrium    Room
Specular pixels    40.8%      -
Slow motion         2.4%    28.1%
IBR occlusions      0.3%     1.9%
Keyframes           6.0%     5.1%
Total              49.5%    35.1%
50
Results: photon collection for each mesh element. (Plot: pixels with AQM-predicted perceivable differences [%] vs. number of frames in an animation segment, for the fixed and adaptive variants.)
51
Results (Timings) T_pt – photon tracing and precomputation of irradiance (sec/frame) T_d – direct illumination T_i – indirect illumination T – total time, i.e. T = T_pt + T_d + T_i
52
Results (HD storage and Errors) ●The storage requirements depend only weakly on the frame resolution, since the irradiance cache data is stored in object space. ●The visual quality of an animation produced by our method is better than that of the reference solution, because temporal flickering is significantly reduced. (Table: RMS error with respect to the reference animation for the BOX scene; size of the irradiance cache: N – the number of gathering rays, #E – the number of irradiance values.)