A Geometry-Based Soft Shadow Volume Algorithm Using Graphics Hardware


A Geometry-Based Soft Shadow Volume Algorithm Using Graphics Hardware Ulf Assarsson and Tomas Akenine-Möller SIGGRAPH 2003 Thank you. Welcome, everyone, to this presentation of our paper "A Geometry-Based Soft Shadow Volume Algorithm Using Graphics Hardware".

We present a soft shadow volume algorithm that supports area light sources, simple volumetric light sources, and textures and short video textures as lights, with real-time performance on programmable graphics hardware. The soft shadows are approximate, trading speed against accuracy.

Intro demo An introductory example of what the algorithm can look like. The frame rate is low here because this was captured using our old version. I have removed the shadows from the spheres. Note the self-shadowing and the shadows cast on the spheres.

A short recap… (HARD vs. SOFT) All area or volumetric lights give soft shadows. All real lights have some area or volume, and therefore soft shadows are more realistic than hard shadows.

Overview This is an overview of our algorithm. First the scene is rendered to the frame buffer with only specular and diffuse lighting. Then we compute a visibility mask corresponding to the soft shadows, and this mask is used to modulate the frame buffer. After this, we add an ambient pass to achieve the final result. Computation of the visibility mask: 1st pass: render the hard shadow. 2nd pass: compensate for the overstated umbra.
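The two passes just described can be sketched as follows. This is a minimal illustrative sketch, not the actual shader code; the function names and the normalized [0, 1] contribution values are assumptions made for this example.

```python
# Illustrative sketch of the two-pass visibility-mask computation.
# All values are normalized to [0, 1]; names are hypothetical.

def soft_shadow_visibility(in_hard_shadow, wedge_contributions):
    # 1st pass: the hard shadow volume gives 0 (umbra) or 1 (fully lit).
    v = 0.0 if in_hard_shadow else 1.0
    # 2nd pass: each rasterized wedge compensates the overstated umbra
    # with a (possibly signed) piece of penumbra visibility.
    for c in wedge_contributions:
        v += c
    return max(0.0, min(1.0, v))

def compose_pixel(lit_color, ambient, visibility):
    # The mask modulates the diffuse/specular frame buffer,
    # then the ambient pass is added on top.
    return lit_color * visibility + ambient
```

A point marked as hard shadow in the first pass can be lifted halfway back into the penumbra by its wedges, e.g. `soft_shadow_visibility(True, [0.25, 0.25])` gives 0.5.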

Hard vs. soft shadows Two different light source types: a point source produces only an umbra; an area source produces an umbra and a penumbra. An infinitely small point light source gives an instant transition from no shadow to full shadow. An area or volumetric light source gives rise to a penumbra region, where the transition is smooth.
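The size of the penumbra follows from similar triangles. This relation is not stated on the slide; the function below is an illustrative sketch, assuming a spherical light of radius r, a small occluder, and distances measured along the light direction.

```python
def penumbra_width(light_radius, d_occluder, d_receiver):
    # Similar triangles: the penumbra grows with the light's radius
    # and with how far the receiver lies behind the occluder.
    return light_radius * (d_receiver - d_occluder) / d_occluder
```

A point light (radius 0) gives a penumbra of width 0, i.e., a hard shadow, which matches the slide's point-source case.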

A Real-Time Soft Shadow Volume Algorithm This is what we capture in our algorithm. We create so-called penumbra wedges around the penumbra region. These wedges are created from the silhouette edges as seen from the light source, and are then rasterized using a pixel shader that computes the penumbra for geometry located inside them. We have tried a simple approach based on depth inside the wedge, but it does not work very well: difficult situations appear for overlapping and intersecting wedges, and it is often difficult to compute wedges robustly for certain edge situations. Furthermore, the true penumbra region inside a wedge is bounded front and back by hyperboloid surfaces. Therefore, we need a more accurate approach. We have developed a method where the wedge for a silhouette edge only has to enclose the penumbra volume affected by the edge; it does not have to represent it geometrically. A key to the algorithm, and this was a hard problem, is that we split the shadow contribution among the relevant edges, so that each wedge gives a piece of the penumbra contribution.

Wedges Each silhouette edge has a corresponding wedge. It provides a piece of the penumbra contribution and is rasterized by a pixel shader.

Two-pass algorithm

Visibility computation We really want to compute how much of the light source we can see. Assume that this red ball is a point in the penumbra and that we want to compute the shadow intensity at this point. The silhouette edges are outlined here in different colors. When we rasterize the wedge corresponding to the red edge, it provides a visibility contribution as outlined here in red. When we rasterize the wedge corresponding to the green edge, it adds a visibility contribution corresponding to this green part. So we have introduced a virtual split of the visibility between the relevant edges (here the red and green ones), and that is the key to our algorithm. The split has no physical meaning; it is just a trick and could be done fairly arbitrarily.
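The per-edge split can be sketched as follows: each wedge contributes a signed fraction of the light source's area, and only the sum of the pieces is meaningful. The function name and the clamping are illustrative assumptions, not the paper's exact formulation.

```python
def light_visibility(edge_coverages):
    # Each silhouette edge's wedge contributes a signed piece of
    # occlusion; because the split is virtual, individual pieces
    # need no physical meaning -- only their sum does.
    occluded = sum(edge_coverages)
    return max(0.0, min(1.0, 1.0 - occluded))
```

For example, one edge may over-occlude by 0.5 while another compensates with -0.25, leaving 75 percent of the light visible: `light_visibility([0.5, -0.25])` gives 0.75.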

Visibility computation

Precomputed contribution in 4D textures Here is a rectangular light source and a silhouette edge. We consider this edge to provide a visibility contribution corresponding to the dark grey area. To compute the visibility quickly in a pixel shader, we use precomputed lookup textures. The edge is clipped against the light source, and the resulting edge vertices (x1, y1) and (x2, y2) are used to look up the percentage of the dark grey area in a 4D lookup texture (32×32 discretization). Now assume that we use a texture as the light source. Then we can precompute the 4D lookup table with respect to the light source texture, so that it contains visibility (coverage) values for each configuration of the clipped silhouette edge vertices and returns the percentage of visible red, green, and blue light-source texels.
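A toy version of the precomputation might look like this. The trapezoid-area coverage below is a simplified stand-in for the per-configuration coverage values the slide describes (the real table is computed against the light-source texture and is much larger); the 4-level discretization only keeps the example small.

```python
def edge_coverage(x1, y1, x2, y2):
    # Fraction of the unit light square lying below the clipped edge
    # segment (a trapezoid), taken here as that edge's coverage.
    # This is a simplified, hypothetical coverage definition.
    return abs(x2 - x1) * (y1 + y2) / 2.0

# Precompute a tiny 4D table indexed by the clipped edge endpoints
# (x1, y1, x2, y2); a real implementation bakes this into a texture.
N = 4  # levels per coordinate (illustrative; a real table uses many more)
table = [[[[edge_coverage(i / (N - 1), j / (N - 1), k / (N - 1), l / (N - 1))
            for l in range(N)] for k in range(N)]
          for j in range(N)] for i in range(N)]
```

At shading time, the pixel shader would quantize the clipped edge endpoints and fetch the coverage with a single table lookup instead of evaluating the area analytically.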

How the visibility computation works: Assume that we are looking up at the light source from a point to be shaded and that we see a large gray occluder. In this example, there are two silhouette edges, A and B, that affect the visibility of the light source. In the middle and to the right, we can see their contributions to the visibility. The edge orientation with respect to the light source center determines the sign of the contribution; therefore contribution A is subtracted and contribution B is added. As can be seen at the bottom, this gives the correct result.
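The sign rule can be sketched with a 2D cross product in the plane of the light source. The function below is an illustrative assumption of how such an orientation test might look, not the paper's exact code.

```python
def contribution_sign(edge_p1, edge_p2, light_center):
    # Positive if the light center lies to the left of the directed
    # edge (counter-clockwise winding), negative otherwise: the edge's
    # orientation decides whether its coverage is added or subtracted.
    ex, ey = edge_p2[0] - edge_p1[0], edge_p2[1] - edge_p1[1]
    cx, cy = light_center[0] - edge_p1[0], light_center[1] - edge_p1[1]
    return 1.0 if ex * cy - ey * cx > 0.0 else -1.0
```

Reversing the direction of an edge flips its sign, which is exactly why contribution A is subtracted while contribution B is added in the example above.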

Rasterize a wedge

A wedge for each silhouette edge… A wedge enclosing a part of the penumbra region is generated for each silhouette edge.

A wedge for each silhouette edge… They dont have to correspond exactly to the penumbra region – it is sufficient that they enclose it. And this is a major advantage of our algorithm, since the correct penumbra region can be complicated to compute.

A wedge for each silhouette edge…

A wedge for each silhouette edge…

A wedge for each silhouette edge…

Rasterizing the wedges The scene is first rendered into the frame buffer and z-buffer. The umbra and penumbra contributions are then rasterized, wedge by wedge, by our algorithm. The pixel shader reads a point from the z-buffer and uses it to compute a shadow contribution that is stored in a separate buffer. This buffer is later used to modulate the frame buffer and "add" the soft shadows to the image. Typically, we use Crow's shadow volume algorithm for hard shadows first to fill the umbra, and then compensate with our penumbra pass.
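The flow just described might be sketched like this on the CPU. `Wedge`, `unproject`, and the dictionary buffers are hypothetical stand-ins for the GPU rasterizer, the z-buffer read, and the separate visibility buffer.

```python
class Wedge:
    """Hypothetical stand-in for one penumbra wedge: the screen pixels
    its rasterization covers, and its signed contribution at a 3D point."""
    def __init__(self, covered_pixels, contribution_fn):
        self.covered_pixels = covered_pixels
        self.contribution_fn = contribution_fn

def rasterize_wedges(wedges, zbuffer, shadow_buffer, unproject):
    # For each pixel a wedge covers, reconstruct the shaded point from
    # the z-buffer, then accumulate that wedge's penumbra contribution
    # into the separate visibility buffer (not the frame buffer).
    for w in wedges:
        for px in w.covered_pixels:
            point = unproject(px, zbuffer[px])
            shadow_buffer[px] += w.contribution_fn(point)
    return shadow_buffer
```

Accumulating into a separate buffer is what lets the contributions of many overlapping wedges be summed before the frame buffer is modulated once at the end.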

Rasterizing the wedges

Rasterizing the wedges And as the wedges are rasterized, the final soft shadow appears.

Rasterizing the wedges

Rasterizing the wedges So how does the pixel shader work?

Examples using textured lights Here are two other examples of textures as light sources. To the left, we have replaced 16 small area light sources with one large texture, and we get the typical banding effect that we would expect from this configuration. (For at least 256 possible shadow levels, at least 256 texels of the light texture should be lit, i.e., non-black, which for a 32×32 texture corresponds to 25 percent.) To the right is a simpler example demonstrating that we get the behaviour we would expect from this two-colored light source; the colors correspond to the physics, since we use the average color of the light. If you look carefully, you can see some green self-shadowing on the woman's legs. Left: texture of 16 area lights. Right: texture of two colors.

Fire Demo So, here is a video sequence demonstrating an animated texture used as a light source. You may be able to see some noise in the shadows, which was due to precision problems; these have since been identified and removed in our optimized version.

Fire Demo And here is a slightly better-looking example, where I have exaggerated the colors in the shadows.

Comparisons Here are some other (non-Quake-monster) examples. Left: reference image. Right: our algorithm.

Comparisons Here are some more examples. Left: reference image. Right: our algorithm.

Comparisons Here are some more examples. Left: small light source. Right: large light source.

Comparisons Here are some more examples. Left: 512 point lights. Right: our algorithm.

The Last Demo (30 s)