1 Thank you for the introduction
Thank you for the introduction. Mahdi apologizes but he could not get his visa, so I'm replacing him. Mahdi M. Bagher / Cyril Soler / Kartic Subr / Laurent Belcour / Nicolas Holzschuch Interactive rendering of acquired materials on dynamic geometry using bandwidth prediction

2 Shading Direct reflection under distant illumination with no visibility and global illumination effects Rendering an image is computationally expensive. from Wikipedia This paper is about shading, that is, computing the amount of illumination arriving at each visible surface point in a virtual scene. This involves computing an integral over all incident directions for every image pixel, so rendering an image is computationally expensive. Illumination BRDF

3 Acquired Materials Using measured materials photo-realistic rendering
large memory footprint (33MB for a single isotropic BRDF) Not suitable for real-time rendering We would like to do shading using realistic materials, for example measured reflectance. These measured materials greatly improve the realism of pictures, but they have a large memory footprint, and are not easy to use in real-time rendering. Phong shading Measured “Color-Changing-Paint-3” BRDF measurement Gantry [Matusik2002]

4 BRDF importance sampling
Monte Carlo Sampling Monte Carlo approximation to the shading integral Importance sampling less noisy Data driven reflectance no analytical importance function pre-computed Importance samples One thing we can do is take many random samples in the space of incident directions to approximate the shading integral. But this operation is costly, since the BRDF and illumination evaluations are costly by themselves, and Monte Carlo requires many samples to converge. We can reduce the number of samples, and therefore the computation time, using importance sampling. Importance samples for a given measured material have to be pre-computed, since there is no analytical importance function available for it. This makes the use of measured materials for fast rendering even more difficult. Monte Carlo sampling BRDF importance sampling
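To make the distinction concrete, here is a minimal 1D sketch (our own toy example, not the paper's setup) of plain Monte Carlo versus importance sampling from a tabulated pdf; the illumination function, the lobe, and the grid resolution are illustrative assumptions, and the tabulated inverse-CDF mimics why importance samples for measured reflectance must be pre-computed.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1D toy shading integral over the polar angle:
#   I = \int_0^{pi/2} L(t) * rho(t) * cos(t) * sin(t) dt
L   = lambda t: 1.0 + 0.5 * np.cos(3.0 * t)          # illustrative illumination
rho = lambda t: np.exp(-(t / 0.2) ** 2)              # narrow "specular" lobe
f   = lambda t: L(t) * rho(t) * np.cos(t) * np.sin(t)

n = 256
# Plain Monte Carlo: uniform samples over [0, pi/2].
t_uni = rng.uniform(0.0, np.pi / 2, n)
est_uniform = (np.pi / 2) * f(t_uni).mean()

# Importance sampling from a *tabulated* pdf ~ rho(t)*cos(t)*sin(t),
# mimicking the pre-computed importance samples needed for measured BRDFs.
grid = np.linspace(0.0, np.pi / 2, 4096)
pdf  = rho(grid) * np.cos(grid) * np.sin(grid)
pdf /= pdf.sum() * (grid[1] - grid[0])                # normalize the table
cdf  = np.cumsum(pdf) * (grid[1] - grid[0])
t_imp = np.interp(rng.uniform(0, cdf[-1], n), cdf, grid)   # inverse-CDF lookup
est_importance = (f(t_imp) / np.interp(t_imp, grid, pdf)).mean()

print(est_uniform, est_importance)   # the importance-sampled estimate is far less noisy
```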

5 Sampling Dealing with two types of sampling Reconstruction Integration
sampling in image space to render an image Integration sampling in the space of incident directions to shade each individual pixel To be clear, we are dealing with two types of sampling. Sampling for reconstruction, which is sampling in image space to render an image. And, sampling for integration, which is sampling in the space of incident directions to shade each individual pixel. Illumination BRDF Cosine

6 The Problem Shading is slow… How to speed up shading?
So, to summarize our problem: Shading is slow! How can we speed it up?

7 Measured reflectance data from [Matusik2002]
Case Study What difference do various materials make in rendering an image? As an example, let's see what difference various materials make. In these pictures, you can see different materials from the MERL database. If you look at them closely, you will identify two kinds of behavior. Measured reflectance data from [Matusik2002]

8 Measured Teflon [Matusik2002]
Diffuse Materials Diffuse materials shading varies slowly across the image (low frequencies) many integration samples for each pixel (wide BRDF lobe) First, for diffuse materials, shading varies very slowly. We can exploit this coherency by shading fewer pixels in the image and interpolating. If you look at the diffuse BRDF lobe in the right image, you can see that many integration samples are needed to compute the shading integral, because we need samples in all directions. Now, let's take a look at a specular material… BRDF Lobe [BRDFLab] Measured Teflon [Matusik2002]

9 Measured color-Changing-Paint3 [Matusik2002]
Specular Materials Specular materials shading varies quickly across the image (high frequencies) few integration samples for each pixel (narrow BRDF lobe) Second, for specular materials, shading varies quickly across the image, and requires many reconstruction samples. But, since the BRDF has a very narrow lobe in a preferred direction, taking a few samples in that direction is enough to estimate the shading integral. Measured color-Changing-Paint3 [Matusik2002] BRDF Lobe [BRDFLab]

10 The Problem Rendering is slow… How to speed up shading?
Can we exploit the coherency between Reconstruction samples… Integration samples… …for adaptive sampling? How to measure the coherency between samples? So, in order to speed up shading, can we exploit the coherency between reconstruction samples, as well as integration samples, for adaptive sampling? The short answer is yes, we can. But in order to do that, we have to know how to measure the coherency between samples for any given scene.

11 Intuition Study the frequencies in the image
As a function of what happens to the light Traveling from the light source to the camera Our intuition is that we should study the frequencies in the image as a function of what happens to the light as it travels through the scene from the light source to the camera.

12 The Traveling Light Light, as it travels, is affected by
Travel through free space Object's curvature Reflectance BRDF Texture Occlusion As light travels through the scene, its frequencies are affected by variations in the incident illumination, the object's curvature, the shape of the BRDF lobe and the texture frequencies, as well as by occlusion.

13 Our Contribution How to speed up shading? By sparsely sampling the image
How to combine the sparse samples? Multi-resolution rendering and bi-lateral up-sampling By adaptive sampling for integration So far, looking at a diffuse material, we realized we can speed up shading by sparsely sampling the image. We can combine these sparse samples using multi-resolution rendering and bi-lateral up-sampling to get the final image. In addition, looking at specular materials, we noticed we can adaptively sample the shading integral: we don't need a fixed number of samples for every pixel we shade.

14 Our Contribution (Illustrated)
Sparse reconstruction samples Adaptive integration samples This is an illustration of what we just explained. We are going to compute shading only on a subset of the pixels and up-sample the results to spread the computation to neighboring pixels. In addition, for each pixel we shade, depending on the variance of the shading integrand, we will adapt the number of samples for integration (that's the picture on the right). ∝ the local maximum variations in the image ∝ the average variance of the shading integrand Multi-resolution shading and bi-lateral up-sampling Final shaded image

15 Assumptions Direct reflection under distant illumination
No visibility and global illumination effects Allow dynamic editable geometry Screen-space rendering in the context of deferred-shading Any shading model is possible Save more if shading cost is higher e.g. measured materials BRDF importance sampling Normal In this work, we assume direct reflection under distant illumination. No visibility and global illumination effects are considered. We allow dynamic, editable geometry. We do screen-space rendering in the context of deferred shading. Any shading model can be used, and the savings are greater when shading is more costly, e.g. for measured materials. For our results we do BRDF importance sampling.

16 Dynamic Geometry Animation
Here is a video showing our technique supporting dynamic editable geometry. This is a model of a car door, painted with the red-metallic-paint BRDF from the MERL database. Notice we are interactively denting the car (which is a shame, actually, it's a nice car) and recomputing complex shading effects interactively.

17 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work Now, we are going to briefly examine the previous work.

18 Related Work GPU Rendering of Complex Materials
[Kautz&McCool1999] [Latta&Kolb2002] [McCool2001] [Kautz2002] [Heidrich&Seidel1999] This topic has been the subject of intensive research in the past. I won't be able to review all previous work here. First, there's a class of papers targeting rendering complex materials using graphics hardware. Early papers used pre-filtered environment maps, others used separable approximations and factorization. There is also spherical harmonics, spherical Gaussians, and wavelet encoding of BRDFs. None of these techniques account for the economy of samples in screen-space. [Heidrich&Seidel1999] (Realistic, Hardware-accelerated Shading and Lighting) -> simple brdfs with pre-filtered envmaps Separable approximations and factorization techniques using texture mapping to store BRDFs: [Kautz&McCool1999] (Interactive Rendering with Arbitrary BRDFs using Separable Approximations) -> factorizes arbitrary brdfs [McCool2001] (Homomorphic Factorization of BRDFs for High-Performance Rendering) -> factorizes arbitrary brdfs [Latta&Kolb2002] (Homomorphic Factorization of BRDF-based Lighting Computation) -> factorizes the full lighting computation [Ramamoorthi2002](Frequency Space Environment Map Rendering)-> pre-filters envmaps using spherical harmonics in the frequency domain. (isotropic brdfs, distant lighting) [Kautz2002](Fast, Arbitrary BRDF Shading for Low-Frequency Lighting Using Spherical Harmonics)-> uses a 2D table of spherical harmonics to compress brdf times cosine and allows arbitrary brdfs, either view or lighting must be fixed. Supports low frequency lighting only. Uses pre-computed-transfer to obtain self-shadowing and self-scattering effects. [Claustres2007](Wavelet Encoding of BRDFs for Real-Time Rendering)-> uses a wavelet decomposition of acquired brdfs. Performs material anti-aliasing simply by selecting the appropriate brdf wavelet resolution. Supports anisotropic brdfs . [Wang2009](All-Frequency Rendering of Dynamic, Spatially-Varying Reflectance)-> supports static scenes, spherical Gaussian approximation of spatially-varying brdfs, and all-frequency shadows from environmental and point lights. [Ramamoorthi2002] [Claustres2007] [Wang2009]

19 Related Work Multi-Resolution Screen-Space Algorithms
[Nichols&Wyman2009] [Shopf2009] [Nichols&Wyman2010] On the other hand, there are many multi-resolution screen-space techniques that use ad-hoc heuristics to render indirect illumination at varying levels of coarseness and then combine the levels using bi-lateral up-sampling. None of these techniques uses a systematic way to decide which source of illumination contributes at what level of coarseness. These techniques work because indirect illumination is always smooth and low frequency. Multi-resolution shading blurs out the indirect contribution nicely. Multi-resolution screen-space algorithms => Render by heuristically shading pixels at varying levels of coarseness, then bilateral upsampling. They use ad-hoc heuristics to determine which pixel should be shaded at which level. [Nichols&Wyman2009](Multiresolution Splatting for Indirect Illumination)-> [Shopf2009](Hierarchical Image-Space Radiosity for Interactive Global Illumination)-> [Nichols&Wyman2010](Interactive Indirect Illumination Using Adaptive Multiresolution Splatting)-> [Soler2010](A Deferred Shading Algorithm for Real Time Indirect Illumination)-> [Nichols2010](Interactive, Multiresolution Image-Space Rendering for Dynamic Area Lighting)-> [Segovia2006](Interactive Indirect Illumination Using Adaptive Multiresolution Splatting)-> [Nichols2010] [Segovia2006] [Soler2010]

20 Related Work Pre-Computed Transport
[Sloan2002] [Ramamoorthi2009] [Liu2004] Finally, pre-computed light transport algorithms allow fast rendering of global illumination effects and soft shadows. On the other hand, they are mostly limited to static geometry, and hardly support high-frequency illumination and highly glossy materials. Pre-computed Transport => Early PRT work enables diffuse objects to be illuminated in low-frequency lighting environments, and includes complex light transport effects like soft shadows, inter-reflections, caustics and subsurface scattering. [Sloan2002](Precomputed Radiance Transfer for Real-Time Rendering in Dynamic, Low-Frequency Lighting Environments)&[Ramamoorthi2009](Precomputation-Based Rendering)-> [Liu2004](All-Frequency Precomputed Radiance Transfer for Glossy Objects)-> [Loos2011](Modular Radiance Transfer)&[Loos2012](Delta Radiance Transfer)->approximate the direct-to-indirect transport and enable dynamic objects in the scene. [Sun2007](Interactive Relighting with Dynamic BRDFs)->introduces pre-computed transfer tensors (PTTs), which allow dynamic source radiance, viewing direction, and BRDFs. [Loos2011] [Loos2012] [Sun2007]

21 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work We begin to explain our technique, by expressing our goal.

22 Goal Sparse sampling for reconstruction
∝ maximum local variations (local bandwidth) Adaptive sampling for integration ∝ variance of the shading integrand Multi-resolution rendering and bilateral up-sampling. a single coarse-to-fine rendering pass Our goal is to sample the image sparsely, computing illumination at a small number of pixels, and simultaneously, for each image pixel where we do compute shading, to adapt the number of integration samples. And we would like to do that in a multi-resolution rendering setup.

23 Roadmap Related work Our technique … The goal
Sparse sampling for reconstruction Local light-field definition Spectral Operations 2D local bandwidth Image-space local bandwidth Adaptive sampling for integration Validation In order to satisfy our goal, we are going to compute information about the picture. Information that tells us which pixels can be sparsely sampled. Specifically, we are going to compute frequency information about a local light field. Let's define the objects we use.

24 Local Light-Field 4D local light-field illustrated in 2D Central ray
We consider a ray, going from the light source (that's the environment map), bouncing on the scene and going to the camera. We are interested in the frequency content of the local light field around this ray. We are going to parameterize this 4D local light field around the central ray. I'll show it in 2D for simplicity. We parameterize neighboring rays both in space and angle, relative to the central ray. Central ray Space Angle

25 Roadmap Related work Our technique … The goal
Sparse sampling for reconstruction Local light-field definition Spectral Operations 2D local bandwidth Image-space local bandwidth Adaptive sampling for integration Validation Now let’s review the possible operations on the spectrum of the local light-field.

26 Spectral Operations Spectral operations [Durand et al. 2005]
The 4D light-field goes through a series of transformations or filters before it reaches the camera sensor: transformations such as transport through free space, the effects of occlusion, curvature, BRDF and texture. Durand et al. have mathematically defined these operations on the local 4D light-field, both in the primal domain and in frequency space. [Durand et al. 2005]

27 Roadmap Related work Our technique … The goal
Sparse sampling for reconstruction Local light-field definition Spectral Operations 2D local bandwidth Image-space local bandwidth Adaptive sampling for integration Validation Now, we define 2D local bandwidth.

28 2D Local Bandwidth 2D Local Bandwidth
maximum local variations about a light path, both in angle and space. Our problem is that the theoretical contribution by [Durand et al 2005] is not usable for interactive rendering applications, mainly because it takes too long to compute. We need to simplify the representation of the spectrum for fast rendering. Instead of the whole spectrum of the local light-field, we are only going to compute the maximum local frequencies in the spectrum. So, we define the 2D local bandwidth as the bounding box containing 95% of the spectral energy, representing the maximum local variations in the spectrum of the local light-field. All the operations defined for the 4D local light-field also hold for the local bandwidth. spatial angular 2D local bandwidth
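As a small illustration of this definition (our own sketch, not the paper's code), one can grow a centered interval along each frequency axis of a local light-field spectrum until it holds 95% of the energy; the per-axis half-widths then play the role of the spatial and angular bandwidths. The function names and the toy signal below are assumptions.

```python
import numpy as np

def axis_half_width(power, axis, frac=0.95):
    """Half-width (in frequency bins) of the smallest centered interval
    holding `frac` of the energy along one axis of an fftshifted spectrum."""
    marginal = power.sum(axis=1 - axis)      # collapse the other axis
    center = marginal.size // 2              # DC bin after fftshift
    total = marginal.sum()
    for r in range(1, center + 1):
        if marginal[center - r:center + r + 1].sum() >= frac * total:
            return r
    return center

def local_bandwidth_2d(local_light_field):
    """2D local bandwidth (spatial, angular) of a small space-by-angle slice."""
    spectrum = np.fft.fftshift(np.fft.fft2(local_light_field))
    power = np.abs(spectrum) ** 2
    return axis_half_width(power, 0), axis_half_width(power, 1)

# Tiny usage example: a signal varying slowly in space, quickly in angle.
x, v = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64), indexing="ij")
print(local_bandwidth_2d(np.cos(2 * np.pi * (2 * x + 12 * v))))
```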

29 Propagating Local Bandwidth
As light travels from the light source to the sensor, we keep track of the 2D local bandwidth about the light-path going through several transformations in frequency space.

30 Propagating Local Bandwidth
One-bounce 2D bandwidth propagation Transport (from light to reflector) Reflection (BRDF & Texture) Transport (from reflector to pixel) Matrix operators on 2D bandwidth We are interested in a simple single-bounce 2D bandwidth propagation, which includes transport from the light to the reflector, reflection off the reflector, and transport from the reflector to the sensor. We formulated these frequency-space transformations as simple matrix operations applied to the 2D local bandwidth, for real-time computation; please refer to the paper for details. In the slide's equation, the bandwidth at pixel p is obtained by applying, to the illumination bandwidth, a transport operator to the surface, a reflection operator (combining scale, curvature, mirror reparameterization, and the BRDF and texture), and a transport operator to the image plane.
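The exact matrices are given in the paper; the sketch below is only an illustration of how such a one-bounce propagation composes, applying shear-like transport and curvature operators (in the flavor of Durand et al. 2005, not the paper's exact forms) to a [spatial, angular] bandwidth vector, with a reflection step that clamps the angular bandwidth by the BRDF's bandwidth and adds the texture's spatial bandwidth. All operator entries and constants are illustrative assumptions.

```python
import numpy as np

# 2D local bandwidth as a non-negative vector [spatial, angular].

def transport(d):
    # Free-space transport over distance d: spatial frequencies leak into
    # angular ones (illustrative shear in the spirit of Durand et al. 2005).
    return np.array([[1.0, 0.0],
                     [abs(d), 1.0]])

def curvature(c):
    # Surface curvature couples angular content back into space (illustrative).
    return np.array([[1.0, abs(c)],
                     [0.0, 1.0]])

def reflect(bw, brdf_angular_bw, texture_spatial_bw, curv):
    # Reflection: reparameterize with curvature, band-limit the angular
    # content by the BRDF's bandwidth, and add the texture's spatial content.
    bw = curvature(curv) @ bw
    bw[1] = min(bw[1], brdf_angular_bw)   # wide lobe -> low angular bandwidth
    bw[0] = bw[0] + texture_spatial_bw
    return bw

def one_bounce(light_bw, d_light, d_eye, brdf_bw, tex_bw, curv):
    bw = transport(d_light) @ np.asarray(light_bw, dtype=float)
    bw = reflect(bw, brdf_bw, tex_bw, curv)
    return transport(d_eye) @ bw          # bandwidth reaching the pixel

# Distant illumination is purely angular; compare a diffuse-like lobe
# (low angular bandwidth) with a specular-like lobe (high angular bandwidth).
print(one_bounce([0.0, 8.0], 2.0, 1.0, brdf_bw=0.5, tex_bw=1.0, curv=0.3))
print(one_bounce([0.0, 8.0], 2.0, 1.0, brdf_bw=6.0, tex_bw=1.0, curv=0.3))
```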

31 Grace cathedral [Debevec2001]
Input Signals Illumination environment map Reflectance Acquired isotropic BRDFs Textures Now we have the building blocks that tell us the effects of light transport on the frequency content of the local light field. What's missing is the input to the pipeline. Our inputs are an environment map, BRDFs and textures, for which we need to pre-compute bandwidth. Grace cathedral [Debevec2001] Gold-paint [Matusik2003]

32 Input Signals’ Bandwidth
Illumination environment map Purely angular bandwidth Reflectance Acquired isotropic BRDFs Textures Purely spatial bandwidth By definition, the bandwidth from the distant illumination and BRDFs are purely angular. And the bandwidth from the texture is purely spatial. Grace cathedral [Debevec2001] Gold-paint [ Matusik2003]

33 Computing Local Bandwidth
Discrete Wavelet Transform to compute bandwidth environment map and reflectance (BRDF & Texture) Discrete Wavelet Transform instead of WFFT wavelets are well localized both in space and frequency. We use a discrete wavelet transform to compute the local bandwidth of the input signals. We chose wavelets over a windowed Fourier transform simply because wavelets are well localized both in space and frequency. Daubechies 4 tap wavelet FFT Space Frequency
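A small sketch of the idea (our own illustration using the PyWavelets package and a Daubechies-4 decomposition; the band-frequency mapping and level count are assumptions, not the paper's pre-computation): per-pixel bandwidth is estimated as the magnitude-weighted average of each wavelet band's characteristic frequency, in the spirit of the estimate mentioned later on the validation slide.

```python
import numpy as np
import pywt

def local_bandwidth_map(img, levels=4, wavelet="db4"):
    """Per-pixel bandwidth estimate: magnitude-weighted average of each
    wavelet band's characteristic frequency (in cycles per pixel)."""
    coeffs = pywt.wavedec2(img, wavelet, level=levels, mode="periodization")
    h, w = img.shape
    num = np.zeros((h, w))
    den = np.full((h, w), 1e-12)
    # coeffs[1:] are detail bands, from coarsest (level `levels`) to finest (1).
    for i, (ch, cv, cd) in enumerate(coeffs[1:]):
        level = levels - i                  # dyadic level of this band
        freq = 0.5 / (2 ** (level - 1))     # upper band edge, cycles per pixel
        mag = np.abs(ch) + np.abs(cv) + np.abs(cd)
        up = np.repeat(np.repeat(mag, 2 ** level, axis=0), 2 ** level, axis=1)
        up = up[:h, :w]                     # guard against rounding
        num += freq * up
        den += up
    return num / den

# Usage: a sharp edge shows a higher local bandwidth than flat regions.
img = np.zeros((128, 128))
img[:, 64:] = 1.0
print(local_bandwidth_map(img).max())
```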

34 Local Bandwidth Examples
These are examples of the local bandwidth computed from the input signal. Grace cathedral [Debevec2001] Angular bandwidth Gold-paint [ Matusik2003] Angular bandwidth

35 Roadmap Related work Our technique … The goal
Sparse sampling for reconstruction Local light-field definition Spectral Operations 2D local bandwidth Image-space local bandwidth Adaptive sampling for integration Validation How can we combine and convert the local bandwidths arriving at a pixel into image-space bandwidth?

36 Image-Space Bandwidth
We combine bandwidth from various incident directions to get the final bandwidth at a given pixel. Taking a max is too conservative, so we take a weighted average instead. Bandwidth at each pixel What we have now is a pipeline that tells us, for each incident direction (on the environment map), the frequency it creates at a given pixel. But we have to combine the frequency information between different incident directions, since a point receives light from several directions. The naive way to do this would be to take the maximum of all bandwidth contributions from incident directions, which is what we did in the picture on the left. As you can see, it overestimates the frequency content, to the point where it is not usable. We use a weighted average instead, giving more weight to samples that carry more energy, as you can see in the center. And this gives us the image-space bandwidth at a given pixel, which is the angular component of the 2D local bandwidth arriving at the sensor. If you look at the “rendered image” on the right, the bright highlights imply high local bandwidth coming from a specific direction in the shading integral. In order to capture these kinds of effects, we need to combine bandwidth information from various incident directions to get the final bandwidth at a given pixel. The naive way to do it is to take a max between the bandwidths contributed by the various incident directions. But since taking a max is far too conservative, we take the weighted average instead, to give more weight to the samples that carry more information (energy). Finally, the image-space bandwidth at a given pixel is due to the angular component of the 2D local bandwidth arriving at the sensor. Why are we using an average? In practice you cannot compute the 95th percentile of a linear combination of density functions, because it involves the integral of the inverse of the function (it is a non-linear operation), but in practice the average is always between the maximum and the minimum. It is not going to be exactly the average of the bandwidths of the functions because of the inverse, but it is a very good approximation of the actual average. The bandwidth of the average of the functions is difficult to compute; instead, we approximate it by the average of the bandwidths of the functions. The bandwidth at the pixel is due to the angular bandwidth received at the sensor (pinhole camera). When you look at the scene from the camera, what you see as bandwidth is only angular; at the sensor we project onto the angle because we integrate over space. Bandwidth estimate based on max Bandwidth estimate Rendered image
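A toy sketch of this combination step (illustrative values, not the paper's data), contrasting the naive max with the energy-weighted average, where each incident direction is weighted by the illumination times reflectance it contributes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Per-direction angular bandwidths reaching one pixel, and their energies.
n_dirs = 128
bandwidth = rng.uniform(0.1, 1.0, n_dirs)
bandwidth[0] = 50.0                           # one direction carries a huge bandwidth
energy = rng.uniform(0.5, 1.0, n_dirs)        # illumination * reflectance per direction
energy[0] = 1e-3                              # ...but almost no energy

naive = bandwidth.max()                                  # far too conservative
weighted = np.sum(energy * bandwidth) / np.sum(energy)   # what we use instead

print(naive, weighted)
```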

37 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work So far, we have the bandwidth at every pixel, and we use it for screen-space sampling. What is missing is how we sample the shading integral at every pixel. Adaptively.

38 Variance of the Shading Integrand
# shading samples ∝ sum of bandwidths, weighted by the illumination times reflectance squared. The sum is a conservative approximation of the actual variance of the shading integrand. We approximate the variance of the shading integrand (the function of which we are computing the integral). You have this expression that involves the angular bandwidth from the light source incident on the surface and the angular bandwidth of the reflectance, weighted by the square of illumination and reflectance. You can refer to the paper for the derivation. Remember that the variance of the integrand gives you the number of samples you need to approximate the integral. To conservatively approximate the variance of the shading integrand we derive an expression of the variance that involves the angular bandwidth from the light source incident on the surface and the angular bandwidth of the reflectance, weighted by the square of illumination and reflectance. Angular bandwidth arriving at the surface Angular bandwidth of the reflectance
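Read as code, the rule could look like the sketch below (our own paraphrase; the proportionality constant and the clamping range are assumptions, and the exact derivation is in the paper): the variance proxy sums, over incident directions, the squared illumination-times-reflectance multiplied by the sum of the two angular bandwidths, and the integration sample count is taken proportional to it.

```python
import numpy as np

def shading_sample_count(L, rho, bw_light, bw_brdf, k=0.5, n_min=8, n_max=512):
    """Adaptive sample budget for one pixel.
    L, rho       : per-direction illumination and reflectance values
    bw_light     : angular bandwidth of the incident light per direction
    bw_brdf      : angular bandwidth of the reflectance per direction
    k, n_min/max : assumed tuning constants (not from the paper)."""
    variance_proxy = np.sum((L * rho) ** 2 * (bw_light + bw_brdf))
    return int(np.clip(k * variance_proxy, n_min, n_max))

# Usage with illustrative per-direction values.
rng = np.random.default_rng(2)
L = rng.uniform(0.5, 1.5, 256)
rho = rng.uniform(0.0, 0.4, 256)
print(shading_sample_count(L, rho, np.full(256, 2.0), np.full(256, 1.0)))
```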

39 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work Next, comes the validation of our estimations.

40 Validation of the Bandwidth and Variance
Estimated bandwidth Measured bandwidth (WFFT) Estimated variance Measured variance Here we validate our approximations of bandwidth and variance by comparing them to the reference. On the left, bandwidth: first the bandwidth computed by our algorithm and, next to it, the bandwidth measured on the reference picture. On the right, variance, again with a comparison to the reference. You can see that our algorithm does a good job of approximating the bandwidth and the variance, even though we're not computing the exact values. The measured-bandwidth image is blurry because the WFFT gives the bandwidth for the whole image and not for each pixel. At each pixel in the image, there are only a few wavelets participating, and the average of the wavelet coefficients times the bandwidth of the wavelet is a good estimate of the bandwidth.

41 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work Let’s see how our technique works in practice.

42 The Algorithm Step 1 Step 2 Step 3 Step 4 Compute g-buffers
Load Illumination and reflectance bandwidth Step 2 Estimate bandwidth and variance Step 3 Mip-map bandwidth and variance Step 4 Shade and up-sample Our technique starts like any classical deferred shading algorithm by rendering g-buffers. At the same time, we load the pre-computed bandwidth for illumination and reflectance. In the second step we estimate bandwidth and variance. The estimation is very fast. Just about 8ms. Next, we mip-map the estimated bandwidth and variance. Then we shade and up-sample according to the estimated bandwidth and variance in a multi-resolution buffer.
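To make steps 3 and 4 more concrete, here is a rough CPU-side sketch (our own illustration, not the GPU implementation): the predicted image-space bandwidth selects the level at which each pixel is shaded, the predicted variance sets its integration sample budget, and the coarsely shaded results are then spread to the full image; the level mapping, the constants, and the plain copy standing in for bi-lateral up-sampling are all assumptions.

```python
import numpy as np

def shading_level(bandwidth, num_levels):
    """Coarsest level at which a pixel can safely be shaded: roughly one
    level per octave of predicted image-space bandwidth (assumed mapping)."""
    b = np.maximum(bandwidth, 1e-6)
    level = np.floor(np.log2(b.max() / b)).astype(int)
    return np.clip(level, 0, num_levels - 1)

def integration_samples(variance, n_min=8, n_max=256, k=64.0):
    """Per-pixel integration sample budget, proportional to the predicted variance."""
    n = k * variance / max(variance.mean(), 1e-6)
    return np.clip(n.astype(int), n_min, n_max)

def render(bandwidth, variance, shade, num_levels=4):
    """Shade each pixel once, at its own level, then spread the result (here
    by a plain copy; the real algorithm uses bi-lateral up-sampling on g-buffers)."""
    levels = shading_level(bandwidth, num_levels)
    budget = integration_samples(variance)
    image = np.zeros_like(bandwidth)
    for lvl in range(num_levels - 1, -1, -1):          # coarse to fine
        mask = levels == lvl
        image[mask] = shade(np.argwhere(mask), budget[mask])
    return image

# Usage with a dummy shader that just reports its per-pixel sample budget.
bw = np.abs(np.random.default_rng(3).normal(size=(64, 64)))
var = bw ** 2
print(render(bw, var, lambda px, n: n.astype(float)).mean())
```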

43 Video: algorithm Let’s see how our technique works in practice.

44 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work It’s time to see the results and comparison with the reference, ground truth and previous work.

45 Result (Final Image) 4x MSAA (1609 ms)
This is an example of a scene with 4 different materials and a marble texture on the floor. 4x MSAA (1609 ms)

46 Result (Bandwidth) The shaded pixels are shown in white. The black pixels are up-sampled. Notice how fewer pixels are shaded for the diffuse material on the right.

47 Result (Variance) This is the estimate of how many shading samples are needed for each pixel. This time, notice there are fewer shading samples for the specular material on the left.

48 Timings Fast bandwidth/variance estimation (8 ms)
This table shows timings (in ms) for our various test scenes. The time gained by using bandwidth computations varies, but the real number I want you to pay attention to is this: the whole bandwidth and variance estimation is very fast, about 8 ms.

49 Timings Computation time scales linearly with the total number of shading samples The computation time mainly depends on the total number of shading samples. Reducing the number of shading samples will reduce the total computation time.

50 Comparison with Reference
Similar quality (2639 ms) Our algorithm (1015 ms) Similar time (906 ms) We compared with reference images computed using a fixed number of shading samples for every pixel, at similar quality and at similar time. Our technique renders about 2.5 times faster than the reference image of similar quality.

51 Comparison with Spherical Gaussian Approximation (Wang et al. 2009)
Fast but different from ground-truth (notice that color changing effects are missing) Here, we compare our algorithm with the fast shading technique of Wang et al. The two images in the first row were kindly provided by [Wang et al.]. They are best-fit spherical Gaussian approximations to the measured BRDF. Their technique is very fast but very approximate. To accurately represent a material they need to fit many spherical Gaussians. (And for the color-changing-paint material, they must fit spherical Gaussians to each channel separately to properly depict color-channel-dependent effects.) Our results in the bottom row are interactive but much closer to the ground-truth images in the middle row. Path-traced reference Interactive but more accurate

52 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work We introduce an extension to our algorithm which is multi-sample anti-aliasing.

53 Extension Adaptive Multi-Sample Anti-Aliasing
Deferred shading algorithms are naturally not compatible with anti-aliasing using several samples per pixel. Basically, using several samples per pixel just increases the computation time, as all the samples are rendered and then down-sampled; so using 4 samples per pixel takes 4 times as long to render. One great advantage of our technique is that we can perform anti-aliasing with only a sub-linear performance decrease. We only compute shading at a few pixels at the finer levels, according to the predicted bandwidth. What you can see here is at which hierarchical level the pixels are computed.

54 Video: adaptive MSAA Here is a video of our adaptive multi-sample anti-aliasing extension. If you focus on the basis of the statue at the junction between the statue and the basis, you can see aliasing issues, and our computing several samples per pixel removes this aliasing.

55 Timings Sub-linear performance for MSAA
Here you see the performance of our algorithm using several samples per pixel for anti-aliasing, on a log-log scale. The performance for multi-sample anti-aliasing decreases sub-linearly with the number of MSAA samples.

56 Roadmap Related work Our technique The rendering algorithm
The goal Sparse sampling for reconstruction Adaptive sampling for integration Validation The rendering algorithm Results and comparisons Extension: MSAA Conclusion / Future work And finally conclusion and future work.

57 Limitations Over-estimated Bandwidth Variance Up-sampling artefacts

58 Conclusion Interactively shading dynamic geometry with acquired materials Shade a fraction of pixels using bandwidth Adaptively sample shading integrals Combine shaded pixels using up-sampling Exploiting bandwidth to sub-linearly scale MSAA with deferred shading We shade a fraction of pixels based on image-space bandwidth. We adapt the number of samples for the shading integral based on variance. We combine shaded pixels using up-sampling. We also extended the algorithm to support MSAA.

59 Future Work Local light sources Better up-sampling algorithm
Depth-of-field 6D bandwidth computation for SVBRDFs Visibility and indirect illumination We would like to extend this work to support local light sources. The only difference would be that local light sources are no longer purely angular, so we would need to account for the spatial bandwidth of the local light source as well. Currently, up-sampling uses depth and material discontinuities and an anisotropic Gaussian kernel. Depth-of-field is easy to implement: we just need to account for the relevant operations on the bandwidth arriving at the sensor. To some extent we already handle SVBRDFs, because we handle textures, but we assume no correlation between the spatial and angular dimensions in the BRDF. The difficulty with real SVBRDFs is the large memory footprint of the 6D bandwidth as well as of the SVBRDF itself. We could compress the 6D bandwidth, using PCA for example, and use a wavelet decomposition or spherical Gaussians to approximate the SVBRDF and fit it in GPU memory. For visibility, for each frame we would need to sample the visibility for each incoming ray, which is obviously very costly, and the result would not be interactive anymore.

60 Thank you Thank you. Interactive rendering of acquired materials on dynamic geometry using bandwidth prediction

