1
Scalable Illumination for Complex Scenes
Kavita Bala, Computer Science Department, Cornell University. Bruce Walter, Adam Arbree, Milos Hasan, Ganesh Ramanarayanan. My name is Kavita Bala. Thank you for attending my talk. I am going to present my work on scalable rendering for complex scenes.
2
Applications: High-Quality Graphics
Entertainment: games (MMP), movies. Training/simulation: search-and-rescue, surgery, lighting design. Telecollaboration: virtual and augmented reality, e-commerce. [Images: Black and White 2; Kalabsha temple, Bala] There are several applications for high-quality graphics. There are the obvious multi-billion-dollar entertainment industries of games and movies. I should specifically mention massively multiplayer games as an area where people spend their entire day and develop virtual economies that result in real economic exchanges. The often overlooked applications of training and simulation will matter even more, particularly if we throw out the hacks and do principled image generation. Telecollaboration applications such as virtual and augmented reality and e-commerce also benefit (what you see is what you get!). On the left you see the state-of-the-art quality of existing applications. On the right you see what we would like to achieve.
3
Motivation: Scene complexity keeps growing. Complex geometry. Complex materials, textures. Complex illumination. Complex light transport. [Image: Digital Michelangelo] The motivation of my work is that the complexity of the scenes we want to interact with keeps growing in computer graphics. We have scanners producing increasingly complex geometry, like the Digital Michelangelo project. We have image-based acquisition techniques producing complex materials and textures, and also complex illumination, like these HDR images shown in the middle used to light scenes. Additionally, light transport in these scenes includes complex volumetric effects and shading. [Image credits: Debevec, PCG]
4
Tomorrow's scenes: The hardware pipeline alone will not scale; scenes violate the assumptions of graphics hardware: many polygons per pixel, non-local data access. The traditional graphics pipeline will not scale to these scenes because they violate the fundamental assumptions of graphics hardware: they have many polygons per pixel (for example, see this zoom-in of the eye in David's head on the bottom right), and they access rendering data in a memory-incoherent manner, which does not work well with graphics hardware.
5
How far off are we? We want… Interactive: 60 frames per second. Realism: shadows, fog, fire, haze, indirect illumination (the laws of physics… could be artist-mediated). On-the-fly content generation. Robust, automatic. Off by several orders of magnitude. Our goals are clear: we would like interactive performance, and realism, which includes things like shadows, fog, fire, haze, and indirect illumination. What is indirect illumination? It is our shorthand for the laws of physics; note that we still want it to be artist-mediated. Not many of us have interacted with ogres, yet the image on the right was generated by PDI with 1-bounce indirect illumination, where the canopy reflects light onto Shrek's head. Remember the massively multiplayer games I mentioned: they want on-the-fly content generation. We need algorithms that work out of the box and don't rely on artists being paid minimum wage to paint over errors. We are currently several orders of magnitude off from where we want to be. [Image: PDI]
6
And, a moving target… Model complexity grows to match computing. "Toy Story 2 is twice as complex as Bug's Life, which was 10 times more complex than Toy Story." – Ed Catmull, Pixar's CTO. Toy Story: 2-13 hours render time/frame. Bug's Life: 3-4 hours render time/frame. Toy Story 2: 10 min-3 days render time/frame. Monsters, Inc.: render time more than Toy Story, Toy Story 2, and Bug's Life combined! Without global illumination, and having "cheated like crazy". With 1-bounce GI: even slower. Furthermore, we have a moving target. We are several orders of magnitude too slow, and relying on Moore's law to close that gap would take 25 years (assuming Moore's law even holds). Meanwhile, scene complexity keeps growing to match computing, as the movie industry shows. This is without GI; with GI it is even slower (at least 4-6 orders of magnitude, with only 1-bounce GI). And remember, they cheat like crazy.
7
Goal: Scalable graphics. Match visual complexity, not input complexity. Insight: capture visually important features. Enable high quality and scalable performance. Lightcuts for rendering; feature-based textures. The goal of my research is to achieve scalable graphics that can handle these complex scenes by doing work proportional to the visual complexity (and not the representational complexity) of scenes. We propose feature-based graphics that captures the visually important features in scenes. By finding what is visually important, we are able to achieve both high quality and high performance. Our research applies to input representations, rendering, and display.
8
Projects: Feature-based textures, Lightcuts, Edge-and-point rendering, Cinematic relighting.
9
Background: Ray Tracing
[Diagram labels: eye, image plane, glossy reflection, refraction, soft shadow, color bleeding] Scalable for geometric complexity [ReshetovSoupikovHurley05]; not scalable for lighting complexity. We are trying to speed up high-quality software renderers. Ray tracers are the rendering technique of choice because they produce high-quality, accurate images. Ray tracers capture effects such as glossy reflections, refractions, soft shadows due to area light sources, and color bleeding, where light reflects off the green wall onto the ceiling. Ray tracers compute each image by shooting at least one ray through every pixel to find the closest visible object. For example, let us consider a single pixel (shown in cyan) in the image plane on the right; the camera or user's eye position is also shown on the right. The ray tracer finds the closest visible object along the ray through that pixel, in this case the glossy cylinder. To determine the color assigned to that pixel, the ray tracer computes the light energy that can arrive through that point to the eye by shooting many more rays into the scene; one example of such rays is shown here. So computing global illumination can be very expensive.
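To make that per-pixel structure concrete, here is a minimal sketch in Python. The `camera`, `scene`, and `shade` interfaces are hypothetical stand-ins for illustration, not the renderer described in this talk:

```python
def render(scene, camera, shade, width, height):
    """Per-pixel ray-tracing loop (illustrative sketch).

    Assumed interfaces: camera.ray(x, y) builds a primary ray,
    scene.closest_hit(ray) returns the nearest intersection or None,
    scene.background(ray) returns a background color, and
    shade(scene, hit, ray) computes the color at the hit point."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            ray = camera.ray(x, y)           # at least one ray per pixel
            hit = scene.closest_hit(ray)     # closest visible object
            if hit is None:
                row.append(scene.background(ray))
            else:
                # Shading shoots many more rays: shadow rays toward the lights,
                # plus reflection/refraction/indirect rays.  This recursive ray
                # budget is where global illumination becomes expensive.
                row.append(shade(scene, hit, ray))
        image.append(row)
    return image
```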
10
Problem definition: What is shading at a point? Point $= \sum_{i \in \text{lights}} M_i\, G_i\, V_i\, I_i$ (the illumination equation, detailed on a later slide).
11
Effects: Antialiasing
Integral over the pixel area: Pixel $= \int_{\text{pixel area}} \text{Point}\; dA$.
12
Effects: Fog. Integral over the volume: Pixel $= \int_{\text{volume}} \text{Point}\; dV$.
13
Effects: Motion Blur. Integral over time: Pixel $= \int_{\text{time}} \text{Point}\; dt$.
14
Effects: Depth of Field
Integral over the camera aperture: Pixel $= \int_{\text{aperture}} \text{Point}\; da$.
15
Effects: Solve the complete pixel integral: Pixel $= \int_{\text{aperture}} \int_{\text{time}} \int_{\text{volume}} \int_{\text{pixel area}} \text{Point}\; dA\, dV\, dt\, da$.
16
First, shading points… What is shading at a point? Point $= \sum_{i \in \text{lights}} M_i\, G_i\, V_i\, I_i$.
17
Lightcuts: A scalable solution for many point lights; efficient, accurate complex illumination. What is Lightcuts? Lightcuts is a new scalable solution for efficiently computing complex illumination. Two examples are shown here. The image on the left is lit by an environment map with illumination captured from the real world, while the image on the right includes two simulated high-dynamic-range flat-panel displays. Both are challenging illumination problems. In addition, both images include indirect illumination, glossy materials, and anti-aliasing. Yet our method was able to produce each image in under two minutes on a single machine. [Left: environment map lighting & indirect, 111s. Right: textured area lights & indirect, 98s. 640x480, anti-aliased, glossy materials.]
18
Lightcuts: A scalable solution for many point lights, thousands to millions, with sub-linear cost. The core of Lightcuts is a new scalable algorithm for efficiently approximating the light from many point lights. By many I mean thousands to millions. Here we show the time to compute this tableau scene with environment map illumination using varying numbers of lights. Evaluating each light individually gives a cost that increases linearly with the number of lights. Using Ward's technique we can avoid shadow rays to some of the weaker lights, but the cost is still basically linear. Lightcuts' cost, however, is strongly sub-linear, so its advantage over previous techniques grows dramatically as the number of lights increases. [Tableau scene]
19
Complex Lighting: Simulate complex illumination using point lights: area lights, captured illumination (environment maps), sun & sky light, indirect illumination. Unifies illumination and enables tradeoffs between components. Once we have a scalable solution for many point lights, we can use it to compute other types of complex illumination. The four examples we demonstrate in the paper are area lights, high-dynamic-range environment maps, sun & sky light, and indirect illumination. Moreover, this allows us to unify the handling of different illumination types and enables new kinds of tradeoffs; for example, bright illumination from one source allows coarser approximations of the other sources. [Image: area lights + sun/sky + indirect]
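As one illustration of how such illumination becomes point lights, the sketch below converts a lat-long HDR environment map into directional lights, one per texel. This is a straightforward conversion written only for illustration; the paper describes its own conversions for area lights, sun/sky, and indirect illumination:

```python
import math

def envmap_to_directional_lights(envmap):
    """Convert a lat-long HDR environment map into directional lights.

    envmap[row][col] is an (r, g, b) radiance triple.  Each texel becomes one
    directional light whose intensity is its radiance times the solid angle it
    covers; a light tree then makes the resulting huge light set tractable."""
    rows, cols = len(envmap), len(envmap[0])
    d_theta = math.pi / rows
    d_phi = 2.0 * math.pi / cols
    lights = []
    for i in range(rows):
        theta = (i + 0.5) * d_theta                     # polar angle of texel center
        solid_angle = math.sin(theta) * d_theta * d_phi
        for j in range(cols):
            phi = (j + 0.5) * d_phi
            direction = (math.sin(theta) * math.cos(phi),
                         math.cos(theta),
                         math.sin(theta) * math.sin(phi))
            r, g, b = envmap[i][j]
            lights.append((direction, (r * solid_angle,
                                       g * solid_angle,
                                       b * solid_angle)))
    return lights
```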
20
Key Concepts: Light cluster: approximate many lights by a single brighter light (the representative light). There are a few key concepts we need in order to understand the Lightcuts approach. First is the light cluster, where we approximate a group of lights by replacing them with a single brighter light called the representative light.
21
Key Concepts: Light cluster; light tree: a binary tree of lights and clusters. Second is the light tree, which is a binary tree of lights and clusters. The leaves of this tree are the individual lights, while the interior nodes represent clusters, which get progressively larger as we go up the tree. [Diagram: clusters above, individual lights at the leaves]
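A minimal sketch of what a light tree node might hold and one simple way to build the tree. Note that the paper uses a greedy agglomerative (bottom-up) build; the top-down median split and the field names below are made-up stand-ins used only to illustrate the structure. Each light here is a dict with a 'position' (x, y, z) and a scalar 'intensity':

```python
import random

class LightTreeNode:
    """A cluster of point lights with a representative light."""
    def __init__(self, lights, left=None, right=None):
        self.left, self.right = left, right
        self.intensity = sum(l['intensity'] for l in lights)   # total cluster power
        xs, ys, zs = zip(*(l['position'] for l in lights))
        self.bbox = ((min(xs), min(ys), min(zs)),               # bounding volume,
                     (max(xs), max(ys), max(zs)))               # used for error bounds
        # Representative light: chosen with probability proportional to intensity;
        # it is later scaled to carry the whole cluster's power.
        self.representative = random.choices(
            lights, weights=[l['intensity'] for l in lights])[0]

def build_light_tree(lights, axis=0):
    """Top-down median-split build (illustration only)."""
    if len(lights) == 1:
        return LightTreeNode(lights)
    lights = sorted(lights, key=lambda l: l['position'][axis])
    mid = len(lights) // 2
    return LightTreeNode(lights,
                         build_light_tree(lights[:mid], (axis + 1) % 3),
                         build_light_tree(lights[mid:], (axis + 1) % 3))
```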
22
Key Concepts: Light cluster; light tree; a cut: a set of nodes that partitions the lights into clusters. The third is a cut, which is a set of tree nodes that partitions the lights into clusters, illustrated here by the orange line.
23
Simple Example: [Diagram: a light tree over four lights (#1-#4); the leaves are the individual lights and the interior clusters are labeled by their representative lights.] Here is a simple scene with four lights and its corresponding light tree.
24
Three Example Cuts: [Diagram: three cuts through the four-light tree, with cut nodes {#1, #2, #4}, {#1, #3, #4}, and {#1, #4}.] Here we show three example cuts through the light tree. Highlighted above each cut are the regions where that cut produces an accurate approximation of the exact illumination.
25
Three Example Cuts: [Diagram: the same three cuts.] If we look at this green point on the left of the images, the orange cut produces a good approximation, while the blue and purple cuts would cause too much error (good / bad / bad).
26
Three Example Cuts: [Diagram: the same three cuts.] Conversely, for this point in the center of the images, the blue cut is a good approximation while the orange and purple cuts are not usable (bad / good / bad). This illustrates an important point: we will want to use different cuts in different parts of the image.
27
Three Example Cuts: [Diagram: the same three cuts.] For this point on the right side of the images, all three cuts are usable (good / good / good). In this case the purple cut is the best choice because it is the cheapest to compute, as it contains the fewest nodes.
28
Algorithm Overview: For each eye ray, choose a cut to approximate the local illumination (cost vs. accuracy), and avoid visible transition artifacts by ensuring each cluster's error is below the visibility threshold (Weber's law: a 2% threshold), so transitions will not be visible. We build the light tree using a simple greedy approach. The more difficult problem is choosing the cuts. There is a cost vs. accuracy tradeoff: cuts higher in the tree are cheaper, while cuts lower in the tree are more accurate. We also need to avoid transition artifacts. Since we will use different cuts in different parts of the image, there will be transitions between places where we use a cluster and places where we refine it, and we don't want these to produce visible artifacts; we will actually use this criterion to drive the cut selection.
29
Illumination Equation
point $= \sum_{i \in \text{lights}} M_i\, G_i\, V_i\, I_i$. The contribution from a point light can be written as the product of four terms (a material term $M_i$, a geometric term $G_i$, a visibility term $V_i$, and the light's intensity $I_i$) and then summed over all the lights to get the total result. The first term is a material term that depends on the material properties of the visible surface; our initial implementation supports diffuse, Phong, and Ward materials.
30
Cluster Approximation
result $\approx M_j\, G_j\, V_j \sum_{i \in \text{cluster}} I_i$, where $j$ is the representative light of the cluster. When we approximate a cluster, we use the material, geometric, and visibility terms from the representative light for all the lights in the cluster. This is much cheaper, but introduces some error.
31
Cluster Error Bound: error $\le M_{ub}\, G_{ub}\, V_{ub} \sum_{i \in \text{cluster}} I_i$, where the subscript $ub$ denotes an upper bound of that term over the cluster. We can bound the approximation error by computing upper bounds for each of the terms over the cluster. Visibility is always less than or equal to one, so we use one as its trivial bound. Intensities are known ahead of time. The challenging part is to cheaply and tightly bound the material and geometric terms; we have derived such bounds based on the bounding volume of the cluster, and they are described in the paper.
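For example, for the inverse-square falloff part of the geometric term, a simple upper bound over a cluster comes from the minimum distance between the shading point and the cluster's bounding box. This is only a sketch of the flavor of bound involved; the paper's actual bounds also handle the cosine and material factors:

```python
def inverse_square_bound(point, bbox):
    """Upper-bound 1/d^2 over a cluster using the minimum distance from
    `point` to the cluster's axis-aligned bounding box bbox = (lo, hi)."""
    d2 = 0.0
    for p, lo, hi in zip(point, bbox[0], bbox[1]):
        if p < lo:
            d2 += (lo - p) ** 2      # distance below the box on this axis
        elif p > hi:
            d2 += (p - hi) ** 2      # distance above the box on this axis
    return float('inf') if d2 == 0.0 else 1.0 / d2
```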
32
Cut Selection Algorithm
Start with a coarse cut (e.g., the root node). Once we can bound cluster error, we can use it to select cuts. We start with a coarse cut, such as the root node of the tree.
33
Cut Selection Algorithm
Select the cluster with the largest error bound. Then we select the cut node with the largest error bound (the root node in this case).
34
Cut Selection Algorithm
Refine if the error bound > 2% of the total. We refine this node if its error is greater than 2% of the estimated total; refining means removing a node from the cut and replacing it with its children.
35
Cut Selection Algorithm
Then we again select the cut node with the largest error bound.
36
Cut Selection Algorithm
And refine it if its error is above the threshold.
37
Cut Selection Algorithm
And repeat again.
38
Cut Selection Algorithm
Repeat until the cut obeys the 2% threshold, i.e., until each node on the cut obeys our 2% error threshold.
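Putting these steps together, a sketch of the cut-selection loop might look like the following. The helpers `estimate(node, x)` (shade the cluster via its representative light) and `error_bound(node, x)` (the cluster error bound, assumed to return zero for individual lights) are passed in as stand-ins for the machinery described above:

```python
import heapq

def select_cut(root, x, estimate, error_bound, max_relative_error=0.02):
    """Greedy cut selection: repeatedly refine the cluster with the largest
    error bound until every cluster on the cut is below 2% of the total."""
    total = estimate(root, x)                       # shade root's representative
    heap = [(-error_bound(root, x), id(root), root, total)]
    while heap:
        neg_err, _, node, node_est = heap[0]        # peek at the worst cluster
        if -neg_err <= max_relative_error * total:
            break                                   # worst cluster is good enough
        heapq.heappop(heap)
        total -= node_est                           # replace node by its children
        for child in (node.left, node.right):
            child_est = estimate(child, x)
            total += child_est
            heapq.heappush(heap,
                           (-error_bound(child, x), id(child), child, child_est))
    return total
```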
39
Kitchen, 388K polygons, 4608 lights (72 area sources)
[Images: Lightcuts (128s), Reference (1096s), Error, Error x16] Here is a result computed using this method. This kitchen scene contains 72 area lights, each approximated by 64 point lights. The Lightcuts image on the left took 128 seconds, while the reference image on the right, which evaluated all the lights, took over 1000 seconds. Even close inspection reveals no visible difference between them. We can compute an error image, which is very dark, and then magnify the differences by a factor of 16 to see the types of error introduced. What we see are lots of discontinuity lines, which are the transitions between using a particular cluster and refining it.
40
Combined Illumination
[Left: Lightcuts, 128s, 4,608 lights (area lights only), avg. 259 shadow rays/pixel. Right: Lightcuts, 290s (area + sun/sky + indirect), avg. 478 shadow rays/pixel, only 54 of them to the area lights.] We can also use Lightcuts to compute much more complex illumination. On the right we have added both sun/sky and indirect illumination to create a much richer image. Although we have increased the number of point lights by over a factor of 10, the total time increased by only a factor of just over 2, and the number of shadow rays also increased by about a factor of 2. However, while the image on the left used an average of 259 shadow rays per pixel for the area lights, the image on the right used only 54. Our system automatically uses the presence of the other illumination to allow a coarser approximation of the area lights.
41
Temple, 2.1M polygons, 505,064 lights (Sun/sky + Indirect)
Here is a result of combining lightcuts and reconstruction cuts for an Egyptian temple model lit by sun/sky and indirect illumination. 189 seconds; anti-aliasing 5.5 rays/pixel; avg. shadow rays per eye ray 9.4 (0.001%).
42
Grand Central, 1.46M polygons, 143,464 lights (Area + Sun/sky + Indirect)
Here is another result. This is a model of Grand Central Terminal and contains 880 interior lights plus sun/sky and indirect illumination. Using our techniques, it required an average of only 46 shadow rays per eye ray, which represents just 0.03% of the lights.
43
Bigscreen, 628K polygons, 639,528 lights (Area + Indirect)
This last scene is particularly challenging because it contains two textured area lights simulating two high-dynamic-range monitors. We were able to handle this complex illumination easily by converting each pixel of the monitors into a point light and then applying our scalable algorithm. And here is a short animation showing two HDR videos playing on the monitors. Avg. shadow rays per eye ray: 17 (0.003%).
44
But, we want pixels, not points…
Solve the complete pixel integral: Pixel $= \int_{\text{aperture}} \int_{\text{time}} \int_{\text{volume}} \int_{\text{pixel area}} \text{Point}\; dA\, dV\, dt\, da$.
45
Multidimensional Lightcuts
Complex illumination, complex effects.
46
Multidimensional Lightcuts
A scalable solution for complex effects: GI, anti-aliasing, motion blur, depth of field, participating media. Builds on Lightcuts with new dimensions: anti-aliasing, reflection/refraction, motion blur, depth of field, participating media. [Roulette scene. Lights: captured environment map + indirect. Motion blur: 256 samples/pixel.]
47
Pixels, not Points: Solve the complete pixel integral (end-to-end principle). Discretize the full integral into 2 point sets: light points and gather points (from eye rays), distributed in time, over the volume, and over the camera aperture. [Diagram: gather points]
48
Equation: pixel $= \sum_{(j,i)\,\in\, G \times L} L_{ji}$, the sum of the contributions $L_{ji}$ of all (gather, light) pairs. But the set of all (gather, light) pairs is huge: many millions per pixel!
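One plausible factorization of each pair's contribution, reusing the material, geometric, and visibility terms from the single-point illumination equation plus a gather-point weight $S_j$ (the weight that gather point $j$ carries from the pixel integral), is roughly:

$$L_{ji} \;=\; S_j \, M_{ji} \, G_{ji} \, V_{ji} \, I_i$$

where $M_{ji}$, $G_{ji}$, and $V_{ji}$ are the material, geometric, and visibility terms between gather point $j$ and light $i$, and $I_i$ is the light's intensity.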
49
Product Graph: Build an implicit hierarchy (the product graph) = light tree × gather tree. [Diagram: gather points]
50
Product Graph: [Diagram: the product graph of a light tree with nodes L0-L6 and a gather tree with nodes G0-G2; product graph = light tree × gather tree.]
51
Clusters
52
Cut Refinement: [Diagram: refining a node of the product graph either on the light-tree side (L0-L6) or on the gather-tree side (G0-G2).]
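A sketch of one refinement step on a product-graph node: a (gather cluster, light cluster) pair can be refined on either side. The heuristic below (split whichever cluster has the larger bounding-box diagonal) is only a stand-in; the paper uses its error bounds to make this choice. The node fields match the illustrative LightTreeNode above and a similarly built gather tree:

```python
def diagonal(bbox):
    (x0, y0, z0), (x1, y1, z1) = bbox
    return ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5

def refine_pair(gather_node, light_node):
    """Split a product-graph node into two child pairs, refining either the
    gather cluster or the light cluster."""
    if gather_node.left is None and light_node.left is None:
        return []                                    # both are single points: nothing to refine
    if gather_node.left is None:                     # only the light side can be split
        split_light = True
    elif light_node.left is None:                    # only the gather side can be split
        split_light = False
    else:                                            # heuristic: split the larger extent
        split_light = diagonal(light_node.bbox) >= diagonal(gather_node.bbox)
    if split_light:
        return [(gather_node, light_node.left), (gather_node, light_node.right)]
    return [(gather_node.left, light_node), (gather_node.right, light_node)]
```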
53
180 Gather points X 13,000 Lights = 234,000 Pairs per pixel
Avg cut size 447 (0.19%)
54
7,047,430 Pairs per pixel Avg cut size 174
Time 590 secs
55
Scalability
56
Scalability
57
5,518,900 Pairs per pixel Avg cut size 936
Time 705 secs
58
114,149,280 Pairs per pixel Avg cut size 821
Time 1740 secs
59
Goal: Scalable graphics. Match visual complexity, not input complexity. Insight: capture visually important features. Enable high quality and scalable performance. Lightcuts for rendering; feature-based textures. The goal of my research is to achieve scalable graphics that can handle these complex scenes by doing work proportional to the visual complexity (and not the representational complexity) of scenes. We propose feature-based graphics that captures the visually important features in scenes. By finding what is visually important, we are able to achieve both high quality and high performance. Our research applies to input representations, rendering, and display.
60
Feature-Based Textures
Textures have fixed resolution. Goal: resolution-independent texturing, improving texture quality when zooming in. Applications: games, interactive walkthroughs. The motivation for our work is that standard texture maps have fixed resolution. This means that when a user of an interactive application zooms in on a texture, rather than being rewarded with increased quality, they see blurry reconstruction artifacts, as you can see on the right with the zoomed-in teapot. Our goal is to achieve resolution-independent texturing so that we can produce high-quality output even with arbitrary zooming. This could have a large impact on interactive applications such as games.
61
Feature-Based Textures (FBTs)
Humans are sensitive to features: boundaries in the texture with sharp contrast. Store features as resolution-independent lines/curves. Interpolate from reachable samples: those on the same side of all features. Our solution to the problem of texture resolution limitations is called "Feature-Based Textures" (FBTs for short). We exploit the well-known insight that humans are sensitive to features, which are discontinuities, or boundaries with sharp contrast. We improve texture quality by storing features in addition to the samples that are usually stored. We use resolution-independent representations of these features (lines and curves) so they are reproduced accurately at any level of magnification. We ensure that features are not blurred by interpolating only from reachable samples, those on the same side of all features.
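A simplified sketch of the reachability idea at lookup time: interpolate bilinearly, but drop any corner texel that is separated from the lookup point by a feature. The representation (straight feature segments) and the reachability test here are simplified stand-ins; the actual FBT representation uses resolution-independent curves and a more careful reconstruction:

```python
def fbt_lookup(texture, features, u, v):
    """Bilinear lookup restricted to reachable texels.

    texture[y][x] holds (r, g, b) samples; features is a list of 2D segments
    ((x0, y0), (x1, y1)) in texel coordinates; (u, v) is in [0, 1]^2."""
    h, w = len(texture), len(texture[0])
    x, y = u * (w - 1), v * (h - 1)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    corners = [(x0, y0, (1 - fx) * (1 - fy)), (x0 + 1, y0, fx * (1 - fy)),
               (x0, y0 + 1, (1 - fx) * fy),   (x0 + 1, y0 + 1, fx * fy)]
    accum, wsum = [0.0, 0.0, 0.0], 0.0
    for cx, cy, wgt in corners:
        cx, cy = min(cx, w - 1), min(cy, h - 1)
        if wgt == 0.0 or not reachable((x, y), (cx, cy), features):
            continue                     # corner lies across a feature: skip it
        for k in range(3):
            accum[k] += wgt * texture[cy][cx][k]
        wsum += wgt
    if wsum == 0.0:                      # all corners cut off: nearest texel
        return texture[min(round(y), h - 1)][min(round(x), w - 1)]
    return tuple(c / wsum for c in accum)

def reachable(p, q, features):
    """q is reachable from p if the segment p-q crosses no feature segment."""
    return not any(segments_cross(p, q, a, b) for a, b in features)

def segments_cross(p1, p2, p3, p4):
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return ((orient(p3, p4, p1) > 0) != (orient(p3, p4, p2) > 0) and
            (orient(p1, p2, p3) > 0) != (orient(p1, p2, p4) > 0))
```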
62
Quality: Using a standard texture map. Here is an example of the improved quality we can get with FBTs. Using a standard texture map, the finite-resolution yin-yang will eventually get blurred and distorted, as shown here.
63
Quality: Using a Feature-Based Texture. After adding features and forming an FBT, we are able to reconstruct the yin-yang boundaries sharply, while still maintaining the gradients in the texture.
64
Results: [FBT 16x16 vs. bilinear 64x64] For our raster image results, our goal is to add extra data for improved reconstruction. The FBT resolution we choose is the same as the input image resolution so that we completely capture all of the sample data. This typically results in an FBT roughly twice as large as the input texture, depending on how many features have been added. In the flower we have captured the silhouette of the petals and added the 'notch' in the center; this detail is perceivable in the image, but is highly blurred in the standard texture map. Also, users of our system commented that the FBT flower output feels sharper overall, even though the green pigment has not been annotated. The stained glass example shown here has much more sharply defined edges compared to the standard texture map.
65
Results – 3D Models. Artwork from Warcraft® III: Reign of Chaos™ provided courtesy of Blizzard Entertainment. [FBT 256x256, 357 KB vs. bilinear 256x256, 192 KB] Here is another example. We extracted a skin from the game Warcraft III and annotated the runes with features. Notice the improved quality of the cloak compared to the hem of her clothing, where we did not add features. This example is particularly relevant: although the user is free to zoom in on this model in the game, the designers have tried to make it difficult to do so because they are aware of the low texture and model quality, and this is precisely the situation we are trying to address.
66
Results – 3D Models. Artwork from Warcraft® III: Reign of Chaos™ provided courtesy of Blizzard Entertainment. [FBT 256x256, 357 KB vs. bilinear 256x256, 192 KB]
67
Projects: Feature-based textures, Lightcuts, Edge-and-point rendering, Cinematic relighting.
68
Multicore: Identified hierarchies as the key computational kernel for shading. Automatic parallelization. Issues: caching/coherence, load balancing.
69
Future Work: Add richer material representations. Develop hierarchies for materials and lighting together. What is visual complexity?
70
Conclusions: Scene complexity remains a challenge; we need scalable solutions that capture what is visually important. Multidimensional lightcuts: an end-to-end approach for illumination, transport, and complex effects. Feature-based textures: materials.
71