Sources, Shadows, and Shading


1 Sources, Shadows, and Shading
EECS 274 Computer Vision

2 Surface brightness
Brightness depends on local surface properties (albedo), surface shape (normal), and illumination
Shading model: a model of how the brightness of a surface is obtained
With such a model, pixel values can be interpreted to reconstruct the surface's shape and albedo
Reading: FP Chapter 5, H Chapter 11

3 Radiometric properties of sources
How bright (or what color) are objects?
One more definition: the exitance of a light source is the internally generated power (not reflected power) radiated per unit area on the radiating surface
Similar to radiosity: a source can have both radiosity, because it reflects, and exitance, because it emits
Exitance is independent of exit angle: it is internally generated energy radiated per unit time, per unit area
But what aspects of the incoming radiance will we model? Point, line, and area sources, with simple geometry
At this point, mention that for most surfaces the exitance is zero, and that the first thing to do is determine how much radiosity a source produces at a surface. We'll work only with Lambertian surfaces here.

4 Radiosity due to a point source
A small, distant sphere of radius ε and exitance E; because it is far away, it subtends only a small solid angle, roughly π ε²/r² at distance r
(Typo in the figure: d should be r)

5 Radiosity due to a point source
As r is increased, the rays leaving the surface patch and striking the sphere move closer together (roughly evenly), and the collection changes only slightly
ρd is the diffuse reflectance, or albedo
The radiosity due to the source follows by integrating over the small solid angle the source subtends
The important point to make here is that the integral is over a small domain on the input hemisphere of the patch, and the function doesn't change much over that domain. This means that, rather roughly, it's equal to one value times the area of the patch.
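For reference, a rough reconstruction of the resulting formula (following the standard Forsyth and Ponce derivation; the constants on the original slide may differ):

B(P) \approx \rho_d(P) \, \frac{\varepsilon^2 E}{r(P)^2} \, \cos\theta

where θ is the angle between the surface normal at P and the direction from P to the source.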

6 Nearby point source model
The angle term can be written in terms of N and S
N is the surface normal
ρd is the diffuse albedo
S is the source vector: a vector from P toward the source, whose length is the intensity term ε²E
This works because a dot product is essentially a cosine
We now fold the unknown exitance term into the source vector and get a nearby point source model. Mention that very often in the literature the 1/r² term is omitted. This is wrong radiometrically, but can lead to better results for reasons we won't see for several slides.
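A minimal Python sketch of this nearby point source model, B(P) = ρd (N·S(P)) / r(P)², with the ε²E term folded into the source intensity (the function name and arguments are illustrative, not from the slides):

import numpy as np

def nearby_point_source_radiosity(P, normal, albedo, source_pos, intensity):
    """Lambertian radiosity at point P due to a nearby point source.

    intensity plays the role of the eps^2 * E term folded into the source vector S.
    """
    to_source = source_pos - P                  # vector from P to the source
    r = np.linalg.norm(to_source)               # distance to the source
    S = intensity * to_source / r               # source vector, length = intensity term
    n = normal / np.linalg.norm(normal)         # unit surface normal
    cos_term = max(float(np.dot(n, S)), 0.0)    # N . S, clamped: no light from behind
    return albedo * cos_term / r**2             # note the 1/r^2 falloff

Dropping the final division by r**2 gives the radiometrically incorrect variant mentioned in the note above.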

7 Point source at infinity
Issue: the radiosity due to a nearby point source gets bigger as one gets closer; the sun's does not, for any reasonable definition of closer
Assume that all points in the model are close to each other with respect to the distance to the source
Then the source vector doesn't vary much, and the distance doesn't vary much either, and we can roll the constants together to get the model below:
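The formula itself is not in the transcript; in the usual formulation the model reduces to

B(x) = \rho_d(x) \, \big( N(x) \cdot S \big)

with S now a single constant vector shared by all points.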

8 Line sources
The radiosity due to a line source varies with inverse distance, if the source is long enough
The only reason to mention a line source here is as part of a general principle. Point sources look smaller in both directions on the viewing hemisphere as one moves away, so they behave like 1/r²; line sources look smaller in one direction, but not in the other (at least, if they're long enough), and so behave like 1/r. Area sources don't get smaller, which is why they're important.

9 Area sources
Examples: diffuser boxes, white walls
The radiosity at a point due to an area source is obtained by adding up the contribution over the section of the view hemisphere subtended by the source
Change variables and add up over the source

10 Radiosity due to an area source
ρd is the albedo
E is the exitance
r is the distance between points Q and P
Q is a coordinate on the source
I talk about what's going on here as an explicit change of variables: we're going from an integral over the incoming hemisphere to an integral over the source.
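The slide's formula is missing from the transcript; as a sketch, the standard Lambertian area-source expression it refers to has the form

B(P) = \rho_d(P) \int_{\text{Source}} E(Q) \, \frac{\cos\theta_P \cos\theta_Q}{\pi \, r(P,Q)^2} \, dA_Q

where θP and θQ are the angles between the line joining P and Q and the surface normals at P and at Q.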

11 Shading models
Local shading model: surface radiosity is due only to sources visible at each point
Advantages: often easy to manipulate, expressions are simple, and it supports quite simple theories of how shape information can be extracted from shading
Global shading model: surface radiosity is due to radiance reflected from other surfaces as well as radiance received from sources
Advantages: usually very accurate
Disadvantage: extremely difficult to infer anything from shading values
It is crucial to distinguish between two things here: what light does, and what one wishes to model. We've been talking about the first; now we have to worry about the second. Recall that our original shading model summed radiance due to all incoming radiance, but what incoming radiance counts? That from sources only (a local shading model), or that reflected from other passive reflectors as well (a global shading model)? I usually point out here that local shading models are wrong, but quite easy to manipulate, and global shading models are right, but very difficult to manage.

12 Local shading models
For point sources at infinity:
For point sources not at infinity:
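The formulas the slide refers to are not in the transcript; summing the point-source models above over the sources visible from x gives, roughly,

B(x) = \sum_{i \in \text{visible}(x)} \rho_d(x) \, \big( N(x) \cdot S_i \big)   (sources at infinity)

B(x) = \sum_{i \in \text{visible}(x)} \rho_d(x) \, \frac{N(x) \cdot S_i(x)}{r_i(x)^2}   (nearby sources)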

13 Shadows cast by a point source
A point that can't see the source is in shadow (whether a cast shadow or a self shadow)
For point sources, the geometry is simple

14 Area source shadows
Area sources do not produce dark shadows with crisp boundaries
Out of shadow, penumbra ("almost shadow"), umbra ("shadow"): in the figure, 1 is out of shadow, 2 is the penumbra, and 3 is the umbra
I point out the vast number of soft shadows that one sees, and the fact that if there are lots of area sources, there may be no umbra and, in fact, the penumbra is small and difficult to spot, leading to a situation where there are few shadows. Most students have never asked themselves why they see so few shadows indoors.

15 Photometric stereo
Assume:
a local shading model
a set of point sources that are infinitely distant
a set of pictures of an object, obtained in exactly the same camera/object configuration but using different sources
a Lambertian object (or the specular component has been identified and removed)

16 Monge patch
Projection model for surface recovery: the Monge patch, a surface written as (x, y, f(x, y))
In computer vision, it is often known as a height map, depth map, or dense depth map

17 Image model
For each point source, we know the source vector (by assumption)
We assume we know the scaling constant of the linear camera (i.e., the intensity value is linear in the surface radiosity)
Fold the normal and the reflectance into one vector g, and the scaling constant and source vector into another vector Vj
g(x, y) describes the surface; Vj is a property of the illumination and of the camera
Out of shadow, the measured intensity is given by the model below; in shadow, it is zero
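The model itself is not in the transcript; in the standard formulation it reads

I_j(x, y) = g(x, y) \cdot V_j   (out of shadow),        I_j(x, y) = 0   (in shadow)

where g(x, y) = \rho(x, y) N(x, y) and V_j = k S_j, with k the camera scaling constant and S_j the j-th source vector.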

18 From many views
From n sources, for each of which Vi is known
For each image point, stack the measurements
Solve a least-squares problem to obtain g
One linear system per point (see the sketch below)
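A minimal Python sketch of this step (illustrative names, not from the slides): the rows of V are the known vectors Vj, and the least-squares system V g = i is solved for every pixel.

import numpy as np

def photometric_stereo(images, V):
    """Recover g(x, y) = albedo * normal from n images and a known source matrix.

    images : array of shape (n, H, W), one image per light source
    V      : array of shape (n, 3), row j is the scaled source vector Vj
    Returns g with shape (H, W, 3).
    """
    n, H, W = images.shape
    I = images.reshape(n, -1)                  # one column of n measurements per pixel
    g, *_ = np.linalg.lstsq(V, I, rcond=None)  # least squares for all pixels at once
    return g.T.reshape(H, W, 3)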

19 Dealing with shadows
A shadowed pixel gives no constraint, so each pixel's equation is weighted by its own measured intensity: shadowed (zero) measurements then drop out of the system. The image values, the weighting matrix, and the source matrix are known; only g is unknown.
Comment that this trick doesn't work if the shadows are not dead black (e.g., some area sources in the background).
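The equation the known/unknown labels refer to is missing from the transcript; in the usual formulation it is, roughly,

\big( I_1^2, \ldots, I_n^2 \big)^T = \mathrm{diag}(I_1, \ldots, I_n) \, \mathcal{V} \, g(x, y)

where \mathcal{V} stacks the source vectors V_1, ..., V_n as rows; the left-hand side, the diagonal matrix, and \mathcal{V} are known, and g(x, y) is the unknown.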

20 Recovering normal and reflectance
Given sufficient sources, we can solve the previous equation (e.g., by least squares) for g(x, y)
Recall that g(x, y) = ρ(x, y) N(x, y), and N(x, y) is the unit normal
This means that ρ(x, y) = |g(x, y)|
This yields a check: if the magnitude of g(x, y) is greater than 1, there's a problem
And N(x, y) = g(x, y) / ρ(x, y)
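Continuing the sketch above (again with illustrative names), the albedo and normals follow directly from g, including the magnitude check:

import numpy as np

def albedo_and_normals(g, eps=1e-8):
    """Split g(x, y) = albedo * unit normal into its two factors."""
    albedo = np.linalg.norm(g, axis=-1)        # rho(x, y) = |g(x, y)|
    normals = g / (albedo[..., None] + eps)    # N(x, y) = g(x, y) / rho(x, y)
    if np.any(albedo > 1.0):
        print("warning: albedo > 1 at some pixels; the model is violated there")
    return albedo, normals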

21 Five synthetic images

22 Recovered reflectance
Here the brightest white is 1 and the darkest black is 0
|g(x, y)| = ρ(x, y): the values should lie in the range 0 to 1

23 Recovered normal field
I’ve mapped the normals onto the surface so that one can see what is going on --- hence the hedgehog appearance.

24 Recovering a surface from normals
Recall the surface is written as the parametric (Monge) surface (x, y, f(x, y))
This means the normal has a form determined by the partial derivatives of f
If we write the known vector g in components, we obtain values for the partial derivatives of the surface (see below)
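The formulas themselves are missing from the transcript; the standard relations for a Monge patch (up to the choice of orientation) are

N(x, y) = \frac{1}{\sqrt{1 + f_x^2 + f_y^2}} \, (-f_x, -f_y, 1)^T,  \qquad  g(x, y) = \rho(x, y) \, N(x, y) = (g_1, g_2, g_3)^T

so that f_x = -g_1 / g_3 and f_y = -g_2 / g_3.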

25 Recovering a surface from normals (cont’d)
Recall that mixed second partials are equal; this gives us a check. We must have ∂f_x/∂y = ∂f_y/∂x (or they should at least be similar, since the estimates are noisy)
We can now recover the surface height at any point by integration along some path, e.g. along the image axes (see the sketch below)
Usually, I point out that this is a line integral, and that the curve chosen is irrelevant; then I ask why the second point is true. This gives students a sense of the main essence of integrability, which we don't deal with in much detail in the book.
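A minimal Python sketch of height recovery by path integration (illustrative, not the slides' code): integrate f_x along the top row, then f_y down each column; if the integrability check holds, other paths give a similar surface.

import numpy as np

def integrate_gradients(fx, fy):
    """Recover a height map f from its partial derivatives by path integration.

    fx[i, j] ~ f[i, j+1] - f[i, j]  (derivative along image columns)
    fy[i, j] ~ f[i+1, j] - f[i, j]  (derivative along image rows)
    """
    H, W = fx.shape
    f = np.zeros((H, W))
    f[0, 1:] = np.cumsum(fx[0, :-1])                     # along the top row
    f[1:, :] = f[0, :] + np.cumsum(fy[:-1, :], axis=0)   # then down each column
    return f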

26 Recovered surface by integration

27 The Illumination Cone
What is the set of n-pixel images of an object under all possible lighting conditions (at fixed pose)? (Belhumeur and Kriegman, IJCV 99)
Figure: a single light source image is a point in the n-dimensional image space with axes x_1, x_2, ..., x_n

28 The Illumination Cone
What is the set of n-pixel images of an object under all possible lighting conditions (but fixed pose)?
Proposition: due to the superposition of images, the set of images is a convex polyhedral cone in the image space
Figure: the illumination cone in the n-dimensional image space; single light source images are extreme rays of the cone, and a 2-light-source image lies inside it

29 Generating the Illumination Cone
For Lambertian surfaces, the illumination cone is determined by a 3D linear subspace B(x, y) built from the albedo a(x, y) and the surface gradients f_x(x, y), f_y(x, y) (i.e., the surface normals)
When there are no shadows, every image lies in this 3D linear subspace
Use least squares on the original (training) images to find the 3D linear subspace, subject to the integrability constraint f_xy = f_yx (Georghiades, Belhumeur, Kriegman, PAMI, June 2001)
Figure: the recovered surface f(x, y), shown with the albedo texture-mapped onto it

30 Image-based rendering: Cast Shadows
Single Light Source Face Movie

31 Yale face database B
10 individuals × 64 lighting conditions × 9 poses => 5,760 images
Variable lighting

32 Curious experimental fact
Prepare two rooms, one with white walls and white objects, the other with black walls and black objects
Illuminate the black room with bright light and the white room with dim light
People can tell which is which (due to Gilchrist)
Why? A local shading model predicts they can't.

33 Global shading models
The lighting can be adjusted so that a local shading model predicts these pictures will be indistinguishable
One figure: a view of a white room with white objects, with a cross-section of the image intensity corresponding to the line drawn on the image
The other: a view of a black room with black objects, with the corresponding cross-section of the image intensity

34 What's going on here?
A local shading model is a poor description of the physical processes that give rise to images, because surfaces reflect light onto one another
This is a major nuisance; the distribution of light (in principle) depends on the configuration of every radiator, and big distant ones are as important as small nearby ones (they subtend comparable solid angles)
The effects are easy to model
It appears to be hard to extract information from these models

35 Interreflections - a global shading model
Other surfaces are now area sources; this yields the interreflection equation sketched below
Vis(x, u) is 1 if points x and u can see each other, 0 if they can't
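The equation itself is missing from the transcript; for Lambertian surfaces it has the standard radiosity form

B(x) = E(x) + \rho_d(x) \int_{\text{all surfaces}} B(u) \, \frac{\cos\theta_x \cos\theta_u}{\pi \, d(x, u)^2} \, \mathrm{Vis}(x, u) \, dA_u

where E(x) is the exitance at x and θ_x, θ_u are the angles between the line joining x and u and the respective surface normals.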

36 What do we do about this?
Attempt to build approximations: ambient illumination
Study qualitative effects: reflexes, decreased dynamic range, smoothing
Try to use other information to control errors

37 Ambient illumination
Two forms:
Add a constant to the radiosity at every point in the scene, to account for shadows being brighter than the point source model predicts
Advantages: simple, easily managed (e.g., how would you change photometric stereo?)
Disadvantages: poor approximation (compare the black and white rooms)
Add a term at each point that depends on the size of the clear viewing hemisphere at each point (see next slide)
Advantages: appears to be quite a good approximation, but the jury is out
Disadvantages: difficult to work with
A sketch of the first form appears below.
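As an illustration (not on the slide), the first form simply adds a constant ambient term A to the local model, e.g. for a source at infinity

B(x) = \rho_d(x) \, \big( N(x) \cdot S \big) + A

and one would then subtract (or jointly estimate) A before solving the photometric stereo system.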

