01/26/05 © 2005 University of Wisconsin

Last Time
– Raytracing and PBRT structure
– Radiometric quantities
Today
– Improving efficiency with Monte Carlo integration
Monte Carlo Efficiency
– We can get an estimate faster (do less work per sample)
– Or we can get an estimate with lower variance as a function of N
– Either improves the efficiency of an estimator F:
  efficiency[F] = 1 / (V[F] · T[F])
  where V[F] is the variance of F and T[F] is the time to compute it
Ways to Improve Efficiency (PBR Chap. 15)
– Less work per sample:
  – Russian roulette
  – Splitting
– Careful sample placement:
  – Stratified sampling
  – Low-discrepancy sequences
  – Best-candidate samplers
– Introducing bias
– Importance sampling
Russian Roulette (PBR 15.1)
– Say the integrand, f(x), is expensive to compute
  – It may require tracing rays and evaluating reflectance functions, or it may even require an infinite amount of work
– For each sample, choose some termination probability q
– Draw a uniform random number ξ; if ξ < q, use a constant c instead of evaluating f(x)
– Otherwise, evaluate the integrand, but reweight it: use (f(x) − qc) / (1 − q)
– Why does it work?
Roulette Math
– The roulette estimator F′ is c with probability q and (F − qc)/(1 − q) with probability 1 − q, so E[F′] = q·c + (1 − q)·(E[F] − qc)/(1 − q) = E[F]: roulette leaves the estimate unbiased
– Roulette never decreases variance
– But it can reduce time without increasing variance much if only samples with low F are terminated
Choosing q and c
– Try to base q on the anticipated value of F
– Integrating the direct contribution of lights requires a shadow-ray test
  – If the light is far away, or the test ray will hit a low-contribution part of the light, the sample should usually be terminated, so q should be high
– The contribution of reflection rays goes down as the ray tree gets deeper
  – Base q on the depth of the ray tree (the number of reflections so far), terminating deep rays more often
– Choose c = 0 in these cases
Splitting (PBR 15.1.1)
– Say you need to compute a multi-dimensional integral
  – Such as the integral over the area seen by a pixel and the directions to an area light source
– Say you expect the integrand to vary more slowly in one dimension than in the other
  – Incoming illumination is going to vary more rapidly, due to occluders, than the contribution over positions within the pixel
– Choose one value for the slowly varying component, and many values for the rapidly varying component
  – One ray through the pixel to find the surface point, then many rays to the light
Effects of Splitting
– Reduces time with little increase in variance
– The example on the previous slide is extremely common
– Extreme contrived case:
– Less contrived case:
Stratified Sampling (PBR Sect. 7.3 and 15.2.1)
– Consider uniformly sampling inside a square
– Sampling truly at random (choose a random x and a random y) will give clumps of points
  – "Uniformly distributed" refers to the probability density, not to the intuitive sense of uniform (i.e., evenly spread)
– Instead, break the domain into small regions, or strata, and put one sample in each piece
  – Choosing uniformly at random within each stratum is called jittered sampling
Stratified Example
Effect on Images
Stratification Comments
– Stratification reduces variance
– But what if the number of samples required is not a product of two reasonable numbers, N = Nx × Ny?
– What do you do in multiple dimensions?
  – The curse of dimensionality gets to you: n strata in each of d dimensions require n^d samples
– It isn't always great
Latin Hypercube Sampling
– Say you want 5 samples in the square
– Use a 5×5 grid, and place samples in cells such that there is only one sample per row and only one per column
  – Can be done by permuting the rows or columns of the identity matrix
– Performance degrades for increasing numbers of samples
  – Can get large empty areas
Stratifying Multiple Dimensions
– Do not attempt to put a sample in every possible multi-dimensional stratum
– Instead, stratify each dimension independently
– To build multi-dimensional samples, permute the strata within each dimension, then take the 1st value from every sequence, the 2nd value, and so on
Stratification can Fail
– You can get unlucky, typically by clumping in one dimension
Low-Discrepancy Samplers
– Deterministic sequences that look random in the more natural sense: noisy
  – They also come with guarantees on their arrangement (bounded discrepancy)
– Their generation is beyond the scope of this class
– Quasi-Monte Carlo: instead of random samples, use low-discrepancy sequences
Best-Candidate Sampling
– Poisson-disk distribution: a uniform distribution with the condition that no two points are closer than a minimum distance
  – An excellent distribution to use, but hard to generate
– Best-candidate patterns approximate Poisson-disk distributions
– New methods along these lines are always being invented
Biased Samplers
– A sampler is biased if the expected value of its estimate is not the correct one
  – Bias is the difference between the expected estimate and the desired value
– Biased estimator for the mean (1/2) of uniform[0,1] samples: F = max(X1, …, XN)/2, which has bias β = −0.5/(N+1)
Bias can be Good
– Even with low sample counts, bias can be good
  – Variance can be lower, e.g., O(N^-2) instead of the usual O(N^-1)
– Most image reconstruction filters give biased estimates
  – This reduces variance and hence apparent noise in the image
– The standard photon-map estimator (covered later) is biased
  – But the result is less noisy
Importance Sampling
– Draw samples according to a density p, and average f(x)/p(x); the function p is called the importance function
– A wise choice of p, as nearly proportional to f as possible, can dramatically reduce variance
– A poor choice can dramatically increase variance
– Choosing importance functions is a well-established art in physically based rendering
Importance Function Generalities
– If you are integrating a product like f(x)g(x), it can be helpful to choose p ∝ f or p ∝ g
– Multiple importance sampling lets you combine results from several importance samplers:
  – Generate some samples according to p
  – Generate some according to q
  – Form a weighted sum (details in PBR); good when no single importance function handles all cases
Next Time
– Cameras and Film