Slide 1: A Geometry-Based Soft Shadow Volume Algorithm Using Graphics Hardware
Ulf Assarsson and Tomas Akenine-Möller, SIGGRAPH 2003
Thank you. Welcome, everyone, to this presentation of our paper "A Geometry…".
Slide 2: We present: a soft shadow volume algorithm
- Area light sources
- Simple volumetric light sources
- Textures and short video textures as lights
- Real-time performance with programmable graphics hardware
- Approximate soft shadows: trade speed vs. accuracy
Slide 3: Intro demo
Example of what it can look like. The frame rate is low here because this was captured with our old version. I have removed the shadows from the spheres. Note the self-shadowing and the shadows cast on the spheres.
Slide 4: A short recap…
- Area/volumetric lights give soft shadows
- Real lights have area or volume
- Thus, soft shadows are more realistic
[Figure: a hard shadow vs. a soft shadow]
All area or volumetric lights give soft shadows. All real lights have some area or volume, and therefore soft shadows are more realistic than hard shadows.
Slide 5: Overview
Computation of the visibility mask:
- 1st pass: render hard shadow
- 2nd pass: compensate for overstated umbra
This is an overview of our algorithm. First the scene is rendered to the frame buffer with only specular and diffuse lighting. Then we compute a visibility mask corresponding to the soft shadows, and this mask is used to modulate the frame buffer. After this, we add an ambient pass to achieve the final result.
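To make the pass structure concrete, here is a minimal CPU-side sketch of the final compositing step, using NumPy arrays as stand-ins for the buffers. All buffer names and the faked mask values are illustrative; in the real algorithm the mask is filled in on the GPU by the wedge passes described next.

```python
import numpy as np

H, W = 480, 640  # hypothetical frame-buffer size

# Pass 1: the scene rendered with only diffuse and specular lighting.
lit = np.random.rand(H, W, 3).astype(np.float32)

# Visibility mask in [0, 1]: 1 = fully lit, 0 = fully in shadow.
# The wedge passes would fill this in; here we fake a darkened region.
visibility = np.ones((H, W), dtype=np.float32)
visibility[100:300, 200:400] = 0.35

# Ambient pass, added unshadowed at the end.
ambient = np.full((H, W, 3), 0.1, dtype=np.float32)

# Modulate the lit pass by the mask, then add the ambient contribution.
frame = lit * visibility[..., None] + ambient
```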
Slide 6: Hard vs. soft shadows
Two different light source types: point source vs. area source.
[Figure: a point source casts only an umbra; an area source casts an umbra surrounded by a penumbra.]
An infinitely small point light source gives an instant transition from no shadow to full shadow. An area or volumetric light source gives rise to a penumbra region, where the transition is smooth.
Slide 7: A Real-Time Soft Shadow Volume Algorithm
This is what we capture in our algorithm. We create so-called penumbra wedges around the penumbra region. These wedges are created from the silhouette edges as seen from the light source, and are then rasterized using a pixel shader that computes the penumbra for geometry located inside them. We tried a simple approach based on depth inside the wedge, but it does not work very well: difficult situations appear for overlapping and intersecting wedges, and it is often difficult to compute wedges robustly for certain edge configurations. Furthermore, the true penumbra region inside a wedge has hyperboloids as its front and back surfaces, so we need a more accurate approach. We have developed a method where the wedge for a silhouette edge only has to enclose the penumbra volume affected by the edge; it does not have to represent it geometrically. A key to the algorithm, and this was a hard problem, is that we split the shadow contribution among the relevant edges, so that each wedge gives a piece of the penumbra contribution.
Slide 8: Wedges
- Each silhouette edge has a corresponding wedge
- Each wedge provides a piece of the penumbra contribution
- Wedges are rasterized by a pixel shader
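The wedges are built from the occluder's silhouette edges as seen from the light. A minimal sketch of extracting those edges, assuming a closed triangle mesh with consistent winding and approximating the light by its center point (function and variable names are ours, not the paper's):

```python
import numpy as np
from collections import defaultdict

def silhouette_edges(vertices, triangles, light_center):
    """Return mesh edges shared by one light-facing and one back-facing
    triangle, as seen from the light center."""
    facing = []
    edge_tris = defaultdict(list)
    for t, (i, j, k) in enumerate(triangles):
        a, b, c = vertices[i], vertices[j], vertices[k]
        n = np.cross(b - a, c - a)  # triangle normal (consistent winding assumed)
        facing.append(np.dot(n, light_center - a) > 0.0)
        for e in ((i, j), (j, k), (k, i)):
            edge_tris[tuple(sorted(e))].append(t)
    return [e for e, ts in edge_tris.items()
            if len(ts) == 2 and facing[ts[0]] != facing[ts[1]]]
```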
Slide 9: Two-pass algorithm
Slide 10: Visibility computation
We really want to compute how much of the light source we can see. Assume that this red ball is a point in the penumbra, and that we want to compute the shadow intensity at this point. The silhouette edges are outlined here in different colors. When we rasterize the wedge corresponding to the red edge, it provides a visibility contribution as outlined here in red. When we rasterize the wedge corresponding to the green edge, it adds a visibility contribution corresponding to this green part. So we have introduced a virtual split of the visibility between the relevant edges (here the red and green ones), and that is the key to our algorithm. The split has no physical meaning; it is just a trick, and it could be done fairly arbitrarily. I will now demonstrate this in more detail with another example.
Slide 12: Precomputed contribution in 4D textures
Here is a rectangular light source and a silhouette edge. We consider this edge to provide a visibility contribution corresponding to the dark grey area. To compute the visibility quickly in a pixel shader, we use precomputed lookup textures. The edge is clipped against the light source, and the resulting edge vertices (x1, y1) and (x2, y2) are used to look up the percentage of the dark grey area in a 4D lookup texture (discretized to 32 steps per coordinate). Now, assume that we use a texture as the light source. Then we can precompute the 4D lookup table with respect to the light source texture, so that it contains visibility or coverage values for each configuration of the clipped silhouette edge vertices and provides a percentage value of the visible amount of red, green and blue light source texels. Thus we can handle textured light sources.
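A sketch of such a precomputation. The paper's figure defines the per-edge region exactly; as a stand-in, we use the triangle spanned by the light center and the clipped edge, whose area fraction plays the role of the stored percentage. Because the endpoints are already clipped to the light, this triangle lies entirely inside the light square, so its area is a single cross product. The resolution of 32 per axis matches the slide; everything else is our naming, and the loop is deliberately unoptimized.

```python
import numpy as np

def edge_coverage(x1, y1, x2, y2, cx=0.5, cy=0.5):
    """Area fraction of the triangle spanned by the light center (cx, cy)
    and the clipped edge endpoints, on a unit-square light. A stand-in for
    the 'dark grey area' percentage stored in the paper's 4D texture."""
    return 0.5 * abs((x1 - cx) * (y2 - cy) - (x2 - cx) * (y1 - cy))

# Precompute the 4D lookup table: one coverage value for every discretized
# configuration of the clipped edge endpoints (32^4 entries).
N = 32
g = (np.arange(N) + 0.5) / N  # texel-center coordinates in [0, 1]
table = np.empty((N, N, N, N), dtype=np.float32)
for a, x1 in enumerate(g):
    for b, y1 in enumerate(g):
        for c, x2 in enumerate(g):
            for d, y2 in enumerate(g):
                table[a, b, c, d] = edge_coverage(x1, y1, x2, y2)
```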
Slide 13: How the visibility computation works
Assume that we are looking up at the light source from a point to be shaded, and that we see a large grey occluder. In this example, there are two silhouette edges, A and B, that affect the visibility of the light source. In the middle and to the right, we can see their contributions to the visibility. The edge orientation with respect to the light source center determines the sign of the contribution. Therefore contribution A is subtracted and contribution B is added, and as can be seen at the bottom, this gives the correct result.
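The add/subtract rule can be written down directly: the sign of the 2D cross product of the two endpoint vectors, taken from the light center, encodes the edge's orientation, and summing the signed per-edge coverages recovers the occluded fraction. A sketch reusing edge_coverage from the previous slide's code; the winding convention and names are our assumptions:

```python
def occluded_fraction(clipped_edges, cx=0.5, cy=0.5):
    """Sum signed per-edge contributions over the silhouette edges that
    overlap the light; clipped_edges holds ((x1, y1), (x2, y2)) pairs."""
    total = 0.0
    for (x1, y1), (x2, y2) in clipped_edges:
        # Orientation with respect to the light center picks the sign
        # (contribution A is subtracted, contribution B is added).
        cross = (x1 - cx) * (y2 - cy) - (x2 - cx) * (y1 - cy)
        sign = 1.0 if cross >= 0.0 else -1.0
        total += sign * edge_coverage(x1, y1, x2, y2, cx, cy)
    return abs(total)

# Example: an occluder covering the left half of the light, as a closed
# silhouette polygon traversed counter-clockwise -> occlusion 0.5.
edges = [((0.0, 0.0), (0.5, 0.0)), ((0.5, 0.0), (0.5, 1.0)),
         ((0.5, 1.0), (0.0, 1.0)), ((0.0, 1.0), (0.0, 0.0))]
print(occluded_fraction(edges))  # 0.5
```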
Slide 14: Rasterize a wedge
Slide 15: A wedge for each silhouette edge…
A wedge enclosing a part of the penumbra region is generated for each silhouette edge.
Slide 16: A wedge for each silhouette edge…
The wedges don't have to correspond exactly to the penumbra region; it is sufficient that they enclose it. This is a major advantage of our algorithm, since the correct penumbra region can be complicated to compute. (One conservative construction is sketched below.)
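One way to build such an enclosing wedge, sketched under our own assumptions rather than the paper's exact construction: bound the light by a sphere, take the hard-shadow plane through the edge and the light center, and tilt it about the edge by the half-angle the light subtends from the edge line. The two tilted planes through the edge are tangent to the light's bounding sphere, so the penumbra of that edge lies between them.

```python
import numpy as np

def rotate_about_axis(v, axis, angle):
    """Rodrigues' rotation of vector v about a unit-length axis."""
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def wedge_planes(e0, e1, light_center, light_radius):
    """Front/back plane normals (planes through e0) of a wedge that
    conservatively encloses the penumbra cast by one silhouette edge."""
    axis = e1 - e0
    axis = axis / np.linalg.norm(axis)
    to_light = light_center - e0
    n_raw = np.cross(axis, to_light)   # plane through edge and light center
    d = np.linalg.norm(n_raw)          # distance from light center to edge line
    n = n_raw / d
    theta = np.arcsin(min(1.0, light_radius / d))  # subtended half-angle
    return (rotate_about_axis(n, axis, +theta),
            rotate_about_axis(n, axis, -theta))
```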
Slide 20: Rasterizing the wedges
The scene is first rendered into the frame buffer and z-buffer. The umbra and penumbra contributions are then rasterized, wedge by wedge, by our algorithm. The pixel shader reads a point from the z-buffer and uses that point to compute a shadow contribution that is stored in a separate buffer. This buffer is later used to modulate the frame buffer, adding the soft shadows to the image. The rendering of the umbra and penumbra contributions is visualized here simultaneously; typically, we use Crow's shadow volume algorithm for hard shadows first to fill the umbra, and then compensate with our penumbra pass.
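A CPU-side sketch of what that pixel shader does for one covered pixel, once the world-space point p has been rebuilt from the z-buffer. It assumes the light spans [0,1]^2 in the plane z = light_z, reuses edge_coverage from the slide-12 sketch, and computes the coverage directly instead of sampling the 4D texture; all names are illustrative.

```python
import numpy as np

def project_to_light(p, v, light_z):
    """Project the 3D point v onto the light plane z = light_z, seen from p."""
    t = (light_z - p[2]) / (v[2] - p[2])
    return (p + t * (v - p))[:2]

def clip_to_unit_square(a, b):
    """Liang-Barsky clip of the 2D segment a-b against [0,1]^2;
    returns None if the segment misses the light entirely."""
    d = b - a
    t0, t1 = 0.0, 1.0
    for p, q in ((-d[0], a[0]), (d[0], 1.0 - a[0]),
                 (-d[1], a[1]), (d[1], 1.0 - a[1])):
        if p == 0.0:
            if q < 0.0:
                return None
        else:
            t = q / p
            if p < 0.0:
                t0 = max(t0, t)
            else:
                t1 = min(t1, t)
    return (a + t0 * d, a + t1 * d) if t0 <= t1 else None

def wedge_pixel_contribution(p, e0, e1, light_z, cx=0.5, cy=0.5):
    """Shadow contribution of one wedge at the point p: project the edge
    onto the light, clip it, and compute the signed coverage. The result
    would be accumulated into the separate visibility buffer."""
    clipped = clip_to_unit_square(project_to_light(p, e0, light_z),
                                  project_to_light(p, e1, light_z))
    if clipped is None:
        return 0.0
    (x1, y1), (x2, y2) = clipped
    cross = (x1 - cx) * (y2 - cy) - (x2 - cx) * (y1 - cy)
    sign = 1.0 if cross >= 0.0 else -1.0
    return sign * edge_coverage(x1, y1, x2, y2, cx, cy)
```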
Slide 22: Rasterizing the wedges
As the wedges are rasterized, the final soft shadow appears.
Slide 24: Rasterizing the wedges
So how does the pixel shader work?
Slide 25: Examples using textured lights
[Left: texture of 16 area lights. Right: texture of two colors.]
Here are two other examples of textures as light sources. To the left, we have replaced 16 small area light sources with one large texture, and we get the typical banding effect that we would expect from this configuration. For the shadows to show at least 256 distinct levels, at least 256 texels should be lit (non-black), which for a 32x32 texture corresponds to 25 percent. To the right is a simpler example, demonstrating the behaviour we would expect from a two-colored light source; the colors correspond to the physics, since we use the average color of the light. If you look carefully, you can see some green self-shadowing on the woman's legs.
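For a textured light, the precomputed entry becomes an RGB triple rather than a plain area fraction: one sums the light-texture texels that fall inside the per-edge region instead of measuring its area. A sketch, reusing the triangle-to-center region from the earlier coverage sketch; the half-plane rasterization, colors, and names are our own.

```python
import numpy as np

def textured_coverage(x1, y1, x2, y2, light_tex, cx=0.5, cy=0.5):
    """RGB coverage of the triangle (light center, p1, p2) on an n x n x 3
    light texture: the summed color of the texels inside the triangle,
    normalized by the texture's total color."""
    n = light_tex.shape[0]
    u = (np.arange(n) + 0.5) / n           # texel-center coordinates
    uu, vv = np.meshgrid(u, u, indexing="xy")

    def side(ax, ay, bx, by):
        # Signed side of each texel center relative to the directed line a->b.
        return (bx - ax) * (vv - ay) - (by - ay) * (uu - ax)

    s0 = side(cx, cy, x1, y1)
    s1 = side(x1, y1, x2, y2)
    s2 = side(x2, y2, cx, cy)
    inside = ((s0 >= 0) & (s1 >= 0) & (s2 >= 0)) | \
             ((s0 <= 0) & (s1 <= 0) & (s2 <= 0))
    return light_tex[inside].sum(axis=0) / light_tex.sum(axis=(0, 1))

# Example: a two-colored 32x32 light, greenish on top, reddish at the bottom.
tex = np.zeros((32, 32, 3))
tex[:16] = (0.2, 1.0, 0.2)
tex[16:] = (1.0, 0.2, 0.2)
print(textured_coverage(0.0, 0.25, 1.0, 0.25, tex))  # mostly green coverage
```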
Slide 26: Fire Demo
Here is a video sequence demonstrating an animated texture used as a light source. You may be able to see some noise in the shadows, which was due to precision problems; these have since been identified and removed in our optimized version.
Slide 27: Fire Demo
And here is a slightly better-looking example, where I have exaggerated the colors in the shadows.
Slide 28: Comparisons
[Left: reference image. Right: our algorithm.]
Here are some other, non-Quake-monster examples…
Slide 29: Comparisons
[Left: reference image. Right: our algorithm.]
Here are some more examples…
Slide 30: Comparisons
[Left: small light source. Right: large light source.]
Here are some more examples…
Slide 31: Comparisons
[Left: 512 point lights. Right: our algorithm.]
Here are some more examples…
Slide 32: The Last Demo (30 s)