Sampling, Aliasing
Examples of Aliasing
What is a Pixel? A pixel is not a box, a disk, or a tiny little light, and a pixel “looks different” on different display devices. A pixel is a sample: it has no dimension, it occupies no area, and it cannot be seen; it has a coordinate and it has a value.
Philosophical perspective: The physical world is continuous; inside the computer, things need to be discrete. Much of computer graphics is about translating continuous problems into discrete solutions, e.g. ODEs for physically-based animation, global illumination, meshes to represent smooth surfaces, and antialiasing. Careful mathematical understanding helps us do the right thing.
More on Samples: The process of mapping a continuous function to a discrete one is called sampling (a discrete position); the process of mapping a continuous variable to a discrete one is called quantization (a discrete value). To represent or render an image using a computer, we must both sample and quantize. Here we focus on the effects of sampling and how to overcome them.
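To make the distinction concrete, here is a minimal Python sketch (my own illustration, not from the slides; the function names, the sine stand-in signal, and the 8-bit level count are all illustrative assumptions) that samples a continuous signal at discrete positions and then quantizes each sample to a discrete value:

import math

def continuous_signal(t):
    # A stand-in for the continuous visual signal: a simple sine wave in [0, 1].
    return 0.5 + 0.5 * math.sin(2.0 * math.pi * t)

def sample(f, num_samples):
    # Sampling: evaluate the continuous function at discrete positions in [0, 1).
    return [f(n / num_samples) for n in range(num_samples)]

def quantize(values, levels=256):
    # Quantization: map each continuous value in [0, 1] to one of `levels` integers.
    return [min(levels - 1, int(v * levels)) for v in values]

samples = sample(continuous_signal, num_samples=16)   # discrete positions
pixels  = quantize(samples, levels=256)               # discrete values
print(pixels)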
Sampling & reconstruction: The visual array of light is a continuous function. (1) We sample it, with a digital camera or with a ray tracer. This gives us a finite set of numbers, not really something we can see; we are now inside the discrete computer world. (2) We need to get this back to the physical world: we reconstruct a continuous function, for example via the point spread of a pixel on a CRT or LCD. Both steps can create problems, but we’ll focus on the first one (you are not display manufacturers).
Examples of Aliasing: texture errors caused by point sampling.
Sampling Density: If we’re lucky, the sampling density is enough and the reconstructed signal matches the input.
Sampling Density: If we insufficiently sample the signal, it may be mistaken for something simpler during reconstruction (that's aliasing!)
Aliasing: insufficient sampling (the sampling rate is too small). Aliasing is a high-frequency signal masquerading as a low frequency. (Figure: the actual high-frequency signal, the sampling interval, and the sampled, aliased signal.)
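As a small numerical illustration (my own, not part of the slides; the 9 Hz signal and 10 Hz sampling rate are arbitrary choices), the sketch below samples a 9 Hz cosine at only 10 samples per second. The resulting samples are identical to those of a 1 Hz cosine, which is exactly the masquerading described above:

import math

fs = 10.0               # sampling rate in Hz: too low for a 9 Hz signal
f_high, f_alias = 9.0, 1.0

for n in range(10):
    t = n / fs
    high = math.cos(2.0 * math.pi * f_high * t)   # actual high-frequency signal
    low  = math.cos(2.0 * math.pi * f_alias * t)  # low-frequency alias
    # The two sample sequences agree (up to floating-point error),
    # so after sampling the 9 Hz signal is indistinguishable from a 1 Hz one.
    assert abs(high - low) < 1e-9
    print(f"n={n}: {high:+.3f} vs {low:+.3f}")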
Discussion: Two types of aliasing: edges, and repetitive textures. The latter are more tricky and harder to solve.
Solution? How do we prevent high-frequency patterns from distorting our image? We blur! For ray tracing: compute at a higher resolution, blur, and resample at a lower resolution. For textures, we can also blur the texture image before doing the lookup.
Aliasing and Line Drawing: We draw lines by sampling at intervals of one pixel and drawing the closest pixels. This results in stair-stepping.
Anti-aliasing Lines: Idea: make the line thicker, fade the line out (this removes high frequencies), and then sample the line.
Anti-aliasing Lines: Solution, Unweighted Area Sampling: treat the line as a rectangle one pixel wide, and colour pixels according to the percentage of each pixel covered by the rectangle.
Solution: Unweighted Area Sampling. The pixel area is a unit square with a constant weighting function. The pixel colour is determined by computing the amount of the pixel covered by the line, then shading accordingly. Easy to compute, and gives reasonable results.
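A sketch of the idea (my own, not the slides' code): I assume the line is given implicitly as a*x + b*y + c = 0 with a unit-length normal and is thickened to one pixel, and I use Sutherland-Hodgman clipping to get the exact covered area of the unit pixel square:

def clip_halfplane(poly, a, b, c):
    # Sutherland-Hodgman: keep the part of polygon `poly` where a*x + b*y + c <= 0.
    out = []
    for i in range(len(poly)):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % len(poly)]
        d0, d1 = a * x0 + b * y0 + c, a * x1 + b * y1 + c
        if d0 <= 0:
            out.append((x0, y0))
        if (d0 <= 0) != (d1 <= 0):                    # edge crosses the boundary
            t = d0 / (d0 - d1)
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

def area(poly):
    # Shoelace formula for the area of a simple polygon.
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % len(poly)][1]
                         - poly[(i + 1) % len(poly)][0] * poly[i][1]
                         for i in range(len(poly))))

def line_coverage(px, py, a, b, c, half_width=0.5):
    # Exact fraction of the unit pixel centred at (px, py) covered by the
    # one-pixel-wide line a*x + b*y + c = 0 (with a*a + b*b == 1).
    square = [(px - 0.5, py - 0.5), (px + 0.5, py - 0.5),
              (px + 0.5, py + 0.5), (px - 0.5, py + 0.5)]
    strip = clip_halfplane(square, a, b, c - half_width)        # keep d <= +half_width
    strip = clip_halfplane(strip, -a, -b, -(c + half_width))    # keep d >= -half_width
    return area(strip) if len(strip) >= 3 else 0.0

print(line_coverage(3.0, 2.0, a=0.6, b=-0.8, c=0.7))   # pixel intensity = covered fraction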
Solution: Weighted Area Sampling. Treat the pixel area as a circle with a radius of one pixel, and use a radially symmetric weighting function (e.g., a cone): areas closer to the pixel centre are weighted more heavily. Better results than unweighted sampling, at slightly higher cost.
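The only change from the unweighted case is the weighting function. A minimal sketch of the cone (my own illustration; the unit footprint radius follows the slide, the rest is an assumption):

import math

def cone_weight(dx, dy):
    # Radially symmetric cone over a circular footprint of radius 1:
    # heaviest at the pixel centre, falling linearly to zero at the rim.
    r = math.hypot(dx, dy)
    return max(0.0, 1.0 - r)

The pixel value is then the cone-weighted average of the coverage test over the circular footprint, instead of the plain area fraction used above.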
Re-hashing: Your intuitive solution is to compute multiple colour values per pixel and average them. A better interpretation of the same idea is that you first create a higher-resolution image, you blur it (average), and you resample it at a lower resolution.
Sampling Theorem: When sampling a signal at discrete intervals, the sampling frequency must be greater than twice the highest frequency of the input signal in order to be able to reconstruct the original perfectly from the sampled version (Shannon, Nyquist, Whittaker, Kotelnikov).
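In symbols (a standard statement of the theorem, not copied from the slide): if the input signal contains no frequencies above \( f_{\max} \), perfect reconstruction requires a sampling frequency
\[
f_s > 2 f_{\max},
\]
where \( 2 f_{\max} \) is known as the Nyquist rate.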
Filters (convolution kernels): A filter is a weighting function whose area of influence is often bigger than a “pixel”. The weights sum to 1, so each sample contributes the same total to the image and brightness stays constant as an object moves across the screen. No negative weights.
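A small sketch (my own; the tent-like kernel values and the edge-replication boundary handling are illustrative choices) showing why weights that sum to 1 preserve constant brightness:

def normalize(kernel):
    # Scale the weights so they sum to 1: each sample contributes the same total.
    s = sum(kernel)
    return [w / s for w in kernel]

def convolve(signal, kernel):
    # Simple 1-D convolution with clamped (edge-replicated) boundaries.
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(signal) - 1)
            acc += w * signal[j]
        out.append(acc)
    return out

tent = normalize([1.0, 2.0, 3.0, 2.0, 1.0])   # a small tent-like kernel
flat = [0.8] * 10                              # constant-brightness input
print(convolve(flat, tent))                    # stays 0.8 everywhere: brightness preserved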
Filters: Filters are used to (1) reconstruct a continuous signal from a sampled signal (reconstruction filters) and (2) band-limit continuous signals to avoid aliasing during sampling (low-pass filters). The desired frequency-domain properties are the same for both types, and often the same filters are used as reconstruction and low-pass filters.
Pre-Filtering: Filter continuous primitives. Treat a pixel as an area and compute the weighted amount of object overlap. What weighting function should we use?
Post-Filtering: Filter samples. Compute the weighted average of many samples, taken with regular or jittered sampling (jittered is better).
The Ideal Filter: Unfortunately, it has infinite spatial extent: every sample contributes to every interpolated point, so it is expensive or impossible to compute.
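The ideal low-pass filter referred to here is the sinc function (named explicitly on the bi-cubic and ideal-reconstruction slides below). In one dimension,
\[
\operatorname{sinc}(x) = \frac{\sin(\pi x)}{\pi x},
\]
which is nonzero arbitrarily far from \( x = 0 \); that is exactly the infinite spatial extent noted above.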
Problems with Practical Filters: Many visible artifacts in re-sampled images are caused by poor reconstruction filters. Excessive pass-band attenuation results in blurry images; excessive high-frequency leakage causes “ringing” and can accentuate the sampling grid (anisotropy).
Gaussian Filter: This is what a CRT does for free!
Box Filter / Nearest Neighbour: Pretending pixels are little squares.
Tent Filter / Bi-Linear Interpolation: Simple to implement and reasonably smooth.
Bi-Cubic Interpolation: Begins to approximate the ideal spatial filter, the sinc function.
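To make the progression concrete, here is a sketch of the three 1-D kernels as functions of the distance x from the sample being reconstructed (my own illustration; the slides do not say which cubic is meant, so the common Catmull-Rom form is assumed here):

def box(x):
    # Box / nearest neighbour: weight 1 within half a pixel, 0 outside.
    return 1.0 if abs(x) <= 0.5 else 0.0

def tent(x):
    # Tent / (bi)linear: weight falls off linearly over one pixel.
    return max(0.0, 1.0 - abs(x))

def catmull_rom(x):
    # One common cubic kernel (4-pixel support); its small negative lobes
    # begin to mimic the sinc.
    x = abs(x)
    if x < 1.0:
        return 1.5 * x**3 - 2.5 * x**2 + 1.0
    if x < 2.0:
        return -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0
    return 0.0

for x in (0.0, 0.5, 1.0, 1.5):
    print(x, box(x), tent(x), catmull_rom(x))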
Ideal sampling/reconstruction: Pre-filter with a perfect low-pass filter (box in frequency, sinc in time). Sample at the Nyquist limit (twice the frequency cutoff). Reconstruct with a perfect filter (box in frequency, sinc in time). And everything is great!
Difficulties with perfect sampling: It is hard to prefilter. The perfect filter has infinite support: Fourier analysis assumes an infinite signal and complete knowledge, with not enough focus on local effects. It also has negative lobes, which emphasizes the two problems above: negative light is bad, and ringing artifacts appear if the prefiltering or the supports are not perfect.
Supersampling in graphics: Pre-filtering is hard: it requires analytical visibility, which is then difficult to integrate analytically with the filter. This is possible for lines, or if visibility is ignored; usually, we fall back to supersampling.
Solution: Super-sampling. Divide each pixel up into “sub-pixels”: 2×2, 3×3, 4×4, etc. A sub-pixel is coloured if it is inside the line, and the pixel colour is the average of its sub-pixel colours. Easy to implement (in software and hardware). (Figure: no anti-aliasing vs. anti-aliasing with 2×2 super-sampling.)
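A sketch of n×n super-sampling for the thick-line case (my own illustration; `inside_line` is a hypothetical point-in-primitive test and its coefficients are arbitrary):

def inside_line(x, y, a=0.6, b=-0.8, c=0.7, half_width=0.5):
    # Hypothetical test: is (x, y) inside the one-pixel-wide line a*x + b*y + c = 0?
    return abs(a * x + b * y + c) <= half_width

def supersample_pixel(px, py, n=2):
    # Divide the pixel into n x n sub-pixels and average their binary colours.
    hits = 0
    for i in range(n):
        for j in range(n):
            x = px - 0.5 + (i + 0.5) / n
            y = py - 0.5 + (j + 0.5) / n
            if inside_line(x, y):
                hits += 1
    return hits / (n * n)

print(supersample_pixel(3.0, 2.0, n=2))   # 2x2 super-sampling
print(supersample_pixel(3.0, 2.0, n=4))   # 4x4 gives a smoother estimate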
Many Types of Supersampling: Grid, Random, Jittered, Poisson Disc.
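A sketch of how these patterns are commonly generated for an n×n sample budget per pixel (my own illustration; the Poisson-disc rejection radius and the number of dart-throwing tries are arbitrary assumptions):

import random

def grid_samples(n):
    # Regular grid: one sample at the centre of each of the n x n cells.
    return [((i + 0.5) / n, (j + 0.5) / n) for i in range(n) for j in range(n)]

def random_samples(n):
    # Purely random: n*n independent uniform samples in the pixel.
    return [(random.random(), random.random()) for _ in range(n * n)]

def jittered_samples(n):
    # Jittered (stratified): one random sample inside each grid cell.
    return [((i + random.random()) / n, (j + random.random()) / n)
            for i in range(n) for j in range(n)]

def poisson_disc_samples(n, radius=0.2, tries=1000):
    # Dart throwing: accept a random sample only if it keeps a minimum distance
    # from all previously accepted samples.
    pts = []
    for _ in range(tries):
        if len(pts) == n * n:
            break
        x, y = random.random(), random.random()
        if all((x - px) ** 2 + (y - py) ** 2 >= radius ** 2 for px, py in pts):
            pts.append((x, y))
    return pts

print(jittered_samples(2))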
Polygon Anti-aliasing: To anti-alias a line, we treat it as a rectangle; anti-aliasing a polygon is similar. Some concerns: micro-polygons (smaller than a pixel), and with super-sampling there may still be polygons that “slip between the cracks”.
Uniform supersampling: Compute the image at resolution k*width by k*height, then downsample using a low-pass filter (e.g. Gaussian, sinc, bicubic).
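A minimal pipeline sketch using NumPy and SciPy's gaussian_filter (my own; the factor k, the sigma rule of thumb, and the random stand-in for the supersampled rendering are illustrative assumptions):

import numpy as np
from scipy.ndimage import gaussian_filter

def downsample(high_res, k, sigma=None):
    # Low-pass filter the k*width x k*height image, then keep every k-th sample.
    if sigma is None:
        sigma = 0.5 * k                      # rough, assumed rule of thumb for the blur width
    blurred = gaussian_filter(high_res, sigma=sigma)
    return blurred[k // 2 :: k, k // 2 :: k]

# Hypothetical usage: "render" at 4x resolution, then downsample to the target size.
k = 4
high = np.random.rand(64 * k, 64 * k)        # stand-in for the supersampled rendering
low = downsample(high, k)                    # final 64 x 64 image
print(low.shape)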
Uniform supersampling: Advantage: the initial (super)sampling captures more high frequencies, which are then not aliased. Issues: frequencies above the (super)sampling limit are still aliased; it works well for edges, but not as well for repetitive textures.
Jittering: Uniform samples plus a random perturbation. Sampling is now non-uniform, so the signal processing gets more complex. In practice, it adds noise to the image, but noise is better than aliasing (Moiré patterns).
Jittered supersampling. (Figure: regular vs. jittered supersampling patterns.)
Jittering: Each sample is displaced by a vector that is a fraction of the subpixel spacing. The low-frequency Moiré (aliasing) pattern is replaced by noise. Extremely effective, and patented by Pixar!
Reconsider uniform supersampling: Problem: supersampling only pushes the problem further. The signal is still not bandlimited, and aliasing still happens. We need a better solution for high-frequency sections of the image.
Adaptive sampling: Adjust your sampling rate and/or bucket size on the fly, depending on the results of previous samples. Attempt to concentrate more samples in high-frequency areas (e.g., edges of objects) and fewer samples in low-frequency areas (constant regions). Get an approximate view of the function by first sampling with a low sampling rate and/or a large bucket size over the entire domain, then use that approximation to estimate where more samples should be taken, focusing on regions of high variance. Generally a recursive process. Much better than naïve supersampling.
Adaptive supersampling Use more sub-pixel samples around edges
Adaptive supersampling: Use more sub-pixel samples around edges. Compute the colour at a small number of samples; if the variance with neighbouring pixels is high, compute a larger number of samples.
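A recursive sketch of this idea (my own illustration; `shade` is a hypothetical function returning the colour of a single sample point, and the variance threshold and recursion depth are arbitrary assumptions):

def adaptive_sample(shade, x, y, size, depth=0, max_depth=3, threshold=0.05):
    # Estimate the average colour over a square region of width `size` centred
    # at (x, y), refining only where the corner samples disagree.
    h = size / 2.0
    corners = [shade(x - h, y - h), shade(x + h, y - h),
               shade(x - h, y + h), shade(x + h, y + h)]
    mean = sum(corners) / 4.0
    variance = sum((c - mean) ** 2 for c in corners) / 4.0
    if depth >= max_depth or variance < threshold:
        return mean                                   # region is smooth enough: stop
    # High variance (likely an edge): recurse into the four quadrants.
    return sum(adaptive_sample(shade, x + dx, y + dy, h, depth + 1, max_depth, threshold)
               for dx in (-h / 2, h / 2) for dy in (-h / 2, h / 2)) / 4.0

# Hypothetical usage: an "edge" where the left quarter of the pixel is black.
edge = lambda x, y: 1.0 if x > 0.25 else 0.0
print(adaptive_sample(edge, x=0.5, y=0.5, size=1.0))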