1 Last Time
- Tone reproduction
  - Photographically motivated methods
  - Gradient compression techniques
  - Perceptual issues
03/07/05 © 2005 University of Wisconsin

2 Today
- High dynamic range environment maps
- Image-based rendering

3 Environment Maps
- Environment maps are infinitely distant area lights covering the hemisphere
- Maps were just given before, with little talk of how they came to be
- Probably the most important rendering technique in film special effects
  - Used when virtual imagery must be put into real filmed environments
  - Allows the real environment to influence the character's appearance

4 Capturing Maps
- Bring a highly reflective sphere along to the set
- Take multiple pictures of the ball:
  - Place the ball in important locations (which ones?)
  - Take a few pictures around the ball (how many?)
- Go home, stitch the pictures together, and re-project to get a map

5 Example Images

6 Resulting Map
- Need to re-project from image space to environment-map coordinates
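The re-projection can be sketched as follows, under assumptions the notes don't spell out: an orthographic camera, a perfect mirror sphere exactly filling the frame, and an illustrative function name and output resolution. For each texel of a latitude-longitude map, compute the world direction, recover the sphere normal as the half-vector between that direction and the view direction, and look up the corresponding ball pixel:

```python
import numpy as np

def mirror_ball_to_latlong(ball, out_h=128, out_w=256):
    """Re-project a mirror-ball photo (H x W x 3) into a latitude-longitude
    environment map. Assumes an orthographic camera at +z looking at a
    mirror sphere that exactly fills the image."""
    h, w, _ = ball.shape
    # Spherical coordinates of each output texel (half-texel offsets).
    theta = (np.arange(out_h) + 0.5) / out_h * np.pi        # polar angle
    phi = (np.arange(out_w) + 0.5) / out_w * 2 * np.pi      # azimuth
    phi, theta = np.meshgrid(phi, theta)
    # World direction each envmap texel should record.
    d = np.stack([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)], axis=-1)
    # The sphere normal that reflects the view ray into d is the
    # half-vector between d and the direction toward the camera.
    v = np.array([0.0, 0.0, 1.0])
    n = d + v
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    # Orthographic projection of the normal gives ball-image coordinates.
    px = np.clip(((n[..., 0] + 1) / 2 * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip(((1 - n[..., 1]) / 2 * (h - 1)).round().astype(int), 0, h - 1)
    return ball[py, px]
```

A real pipeline would also blend several photographs of the ball to fill the region the ball occludes; this sketch handles a single view.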

7 High-Dynamic Range Maps
- The environment map needs higher dynamic range than the final film print (why?)
- But cameras are themselves low dynamic range
- High dynamic range cameras are becoming available, but you can do better with a standard camera
- How do you get a high dynamic range image from a standard camera?

8 High Dynamic Range Imaging (Debevec and Malik, SIGGRAPH 1997)
- Problem: the limited dynamic range of film or CCDs makes it impossible to capture a high dynamic range scene in a single image
- Solution: take multiple images at different exposures
- Problem: how do the pieces get put back together to form a single, composite image?
  - Made difficult because the mapping from incoming radiance to pixel values is non-linear and poorly documented
- Solution: this paper
- Very influential for such a simple idea; used in lots of other papers
- Code is available

9 Solution: Capture Many Images

10 Quantities
- The output you see (pixel values Z, from scanned film or a digital camera) is some function f of the exposure X: Z = f(X)
- X is the product of irradiance and exposure time: X = E·Δt
- Assuming the "principle of reciprocity": doubling the exposure time while halving the irradiance gives the same output, and vice versa
- Aim: recover f, to allow inversion from observed pixel values back to scene irradiances
- Assumption: f is monotonic (surely true, or it's a useless imaging device)

11 Input
- A set of images, indexed by j, with known exposure times Δt_j
- Call the observed value in image j at pixel i: Z_ij = f(E_i·Δt_j)
- Inverting f and taking logs (with g = ln f⁻¹) gives an equation involving only g and the E_i: g(Z_ij) = ln E_i + ln Δt_j
- We want the g and E_i that best represent the given data (the images)

12 Solving
- Solve a linear least squares problem with the objective
  O = Σ_i Σ_j [w(Z_ij) (g(Z_ij) − ln E_i − ln Δt_j)]² + λ Σ_z [w(z) g″(z)]²
- Terms for the function fit and its smoothness, plus a weighting function w that gives more credence to values in the mid-range of the imaging system's dynamic range
- Results are determined only up to a scale, so set the mid-range pixel value to unit radiance
- Don't use all the pixel values: about 50 pixels (chosen by hand), and enough images to cover the range
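A minimal NumPy sketch of this least-squares solve, in the spirit of the gsolve routine the paper provides (the hat-shaped weight, λ = 100, and 256 grey levels follow the description above; exact details of the published code may differ):

```python
import numpy as np

def gsolve(Z, log_dt, lam=100.0, n_levels=256):
    """Recover the log response curve g (one value per grey level) and the
    log irradiance ln(E_i) per pixel from samples Z (n_pixels x n_images)
    and known log exposure times log_dt (one per image)."""
    n, p = Z.shape
    # Hat-shaped weight: favors mid-range values over the clipped extremes.
    w = lambda z: z + 1 if z <= n_levels // 2 - 1 else n_levels - z
    A = np.zeros((n * p + n_levels - 1, n_levels + n))
    b = np.zeros(A.shape[0])
    k = 0
    # Data-fitting rows: w(Z) * (g(Z_ij) - ln E_i) = w(Z) * ln dt_j
    for i in range(n):
        for j in range(p):
            wij = w(Z[i, j])
            A[k, Z[i, j]] = wij
            A[k, n_levels + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    # Fix the free scale: g(mid-range) = 0 (unit radiance at mid-range).
    A[k, n_levels // 2] = 1
    k += 1
    # Smoothness rows: lam * w(z) * g''(z) = 0 via second differences.
    for z in range(1, n_levels - 1):
        wz = lam * w(z)
        A[k, z - 1], A[k, z], A[k, z + 1] = wz, -2 * wz, wz
        k += 1
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:n_levels], x[n_levels:]   # g(0..255), ln E_i per pixel
```

With g in hand, an HDR radiance map is assembled by averaging g(Z_ij) − ln Δt_j over the exposures, weighted by w.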

13 Results – Store Mapping

14 Results – Store Log Plot

15 Results – Church Input

16 Results – Church Rendering (Ward’s Histogram Method)

17 Image-Based Rendering
- Geometry and light interaction may be difficult and expensive to model
  - Imagine the complexity of modeling the exact geometry of carpet (as just one example)
- Image-based rendering seeks to replace geometry and surface properties with images
- May or may not know the viewing parameters for the existing images
- Existing images may be photographs or computer-generated renderings

18 e.g. Texture Mapping
- Use photographs to represent complex reflectance functions
- There are variants that seek to do better than standard texture mapping:
  - Store view-direction-specific information (what sort of effects can you get?)
  - Store lighting-specific information

19 Plenoptic Function
- Returns the radiance passing through a given point x, in a given direction (θ, φ), with a given wavelength λ, at a given time t
- Many image-based rendering approaches can be cast as sampling from, and reconstructing, the plenoptic function
- Note: the function is generally constant along segments of a line (assuming a vacuum)
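As a toy illustration (an entirely hypothetical scene, not from the notes), a 7D plenoptic function that respects the vacuum property can be built by making the radiance depend only on the line through x, never on x itself:

```python
import numpy as np

def plenoptic(x, theta, phi, wavelength, t):
    """Toy plenoptic function L(x, theta, phi, lambda, t). The scene is
    arbitrary; what matters is that radiance is constant along a ray, so
    the value depends only on the line through x, not on x itself."""
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    x = np.asarray(x, dtype=float)
    # Drop the along-ray component of x: all points on the same line in
    # the same direction now map to the same ray parameterization.
    x_perp = x - np.dot(x, d) * d
    # An arbitrary smooth dependence on ray, wavelength, and time.
    return (np.cos(x_perp @ x_perp)
            * np.exp(-((wavelength - 550.0) / 100.0) ** 2)
            * (1.0 + 0.1 * np.sin(t)))
```

Sliding x along the ray direction leaves the value unchanged, which is exactly the redundancy that light-field style IBR methods exploit to reduce the sampling problem's dimensionality.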

20 IBR Systems
Methods differ in many ways:
- The range of new viewpoints allowed
- The density of input images
- The representation for samples (the known images)
- The amount of user help required
- The amount of additional information required (such as intrinsic camera parameters)
- The method for gathering the input images

21 Movie-Map Approaches
- Film views from fixed, closely spaced locations, and store them
  - Storage can be an issue
- Allow the user to jump from location to location, and pan
  - Appropriate images are retrieved from disk and displayed
  - No re-projection: just use the nearest existing sample
- Still used in video games today, but with computer-generated movies (which games? somewhat dated now)

22 QuickTime VR (Chen, 1995)
- Movie-maps in software
- Construct panoramic images by stitching together a series of photographs
  - Semi-automatic process, based on correlation: scale/shift the images so that they look most alike
  - Works best with >50% overlap
- Finite set of panoramas; the user jumps from one to the other
- The hard part is figuring out the projection that takes points in the panorama and reconstructs a planar perspective image
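The correlation-based alignment can be sketched as follows, in a deliberately simplified, horizontal-shift-only form (real stitching also handles scale and cylindrical re-projection). `best_shift` is a hypothetical helper, not from the original system:

```python
import numpy as np

def best_shift(left, right, max_shift):
    """Find the overlap width (in columns) that best aligns `right` onto
    `left`, by scoring each candidate overlap with normalized
    cross-correlation and keeping the highest score."""
    best, best_score = 0, -np.inf
    for s in range(1, max_shift + 1):
        a = left[:, -s:].ravel().astype(float)   # right edge of left image
        b = right[:, :s].ravel().astype(float)   # left edge of right image
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        if denom == 0:
            continue                              # flat strip: no signal
        score = float(a @ b) / denom              # NCC in [-1, 1]
        if score > best_score:
            best, best_score = s, score
    return best
```

The >50% overlap advice in the notes makes sense here: the wider the true overlap, the more pixels vote for the correct shift and the less likely a spurious correlation wins.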

23 Results – Warping

24 Results – Stitching

25 View Interpolation (Chen and Williams, 1993)
- Input: a set of synthetic images with known depth and camera parameters (location, focal length, etc.)
- Computes optical flow maps relating each pair of images
  - An optical flow map is the set of vectors describing where each point in the first image moves in the second image
- Morphs between images by moving points along their flow vectors
- Intermediate views are "real" views only in special cases
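A minimal sketch of morphing along flow vectors: each pixel is pushed a fraction t of the way along its flow vector (forward warping with nearest-pixel splatting). A real system must also fill holes and resolve depth ordering, which this ignores:

```python
import numpy as np

def interpolate_view(img0, flow, t):
    """Warp img0 toward a second view by moving every pixel a fraction t
    along its optical-flow vector. flow[y, x] = (dy, dx) says where pixel
    (y, x) of img0 lands in the other view; 0 <= t <= 1."""
    h, w = img0.shape[:2]
    out = np.zeros_like(img0)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each source pixel after moving fraction t of its flow.
    ny = np.clip(np.round(ys + t * flow[..., 0]).astype(int), 0, h - 1)
    nx = np.clip(np.round(xs + t * flow[..., 1]).astype(int), 0, w - 1)
    out[ny, nx] = img0[ys, xs]   # splat; unwritten pixels remain holes
    return out
```

Blending the forward warp of the first image with the backward warp of the second (weighted by t) is the usual way to hide the remaining holes.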

26 View Morphing (Seitz and Dyer, 1997)
- Uses interpolation to generate new views such that the intermediate views represent real camera motion
- Observation: plain interpolation gives incorrect intermediate views (views not resulting from a real projection)

27 View Morphing
- Observation: interpolation gives correct intermediate views if the initial and final images are parallel views

28 View Morphing Process
Basic algorithm:
- The user specifies a camera path that rotates and translates the initial camera onto the final camera
- Pre-warp the input images to bring them into parallel views
- Interpolate to get the intermediate view
- Post-warp to get the final result
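The steps above can be sketched as below, assuming the pre-warp homographies H0 and H1 (derived from the known projection matrices) are given. Linearly interpolating the post-warp homography is a simplification of the paper's post-warp, used here only to keep the sketch short:

```python
import numpy as np

def warp(img, H):
    """Nearest-neighbor inverse warp of `img` by a 3x3 homography H."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    src = Hinv @ pts                      # where each output pixel samples from
    sx = np.round(src[0] / src[2]).astype(int)
    sy = np.round(src[1] / src[2]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out[ys.ravel()[ok], xs.ravel()[ok]] = img[sy[ok], sx[ok]]
    return out

def view_morph(img0, img1, H0, H1, t):
    """Three-step view-morphing sketch: pre-warp both inputs into parallel
    views, linearly interpolate (valid for parallel views), post-warp."""
    p0 = warp(img0, H0)
    p1 = warp(img1, H1)
    mid = (1 - t) * p0 + t * p1
    Hpost = np.linalg.inv((1 - t) * H0 + t * H1)
    return warp(mid, Hpost)
```

With H0 = H1 = I (the inputs already parallel), the result reduces to plain cross-dissolve, which is exactly the case where interpolation is geometrically correct.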

29 View Morphing Process
- Requires knowledge of the projection matrices for the input images
  - Found with vision algorithms; the user may supply correspondences
- Intermediate motion can be specified by giving the trajectories of four points

30 Next Time
- More image-based rendering

