© 2005 University of Wisconsin

03/07/05

Last Time
- Tone reproduction
- Photographically motivated methods
- Gradient compression techniques
- Perceptual issues

Today
- High dynamic range environment maps
- Image-based rendering

Environment Maps
- Environment maps are infinitely distant area lights covering the hemisphere
- So far, maps were just given, with little talk of how they came to be
- Probably the most important rendering technique in film special effects
- Used when virtual imagery must be put into real filmed environments
- Allows the real environment to influence the character's appearance
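As a reminder of what the map is used for at render time, here is a minimal sketch of the direction-to-texel lookup for a latitude-longitude environment map. The parameterization (y up, -z forward) and the function name are my conventions, not from the slides:

```python
import math

def direction_to_latlong(d):
    """Map a unit direction d = (x, y, z) to (u, v) texture coordinates
    in [0, 1]^2 of a latitude-longitude environment map."""
    x, y, z = d
    u = math.atan2(x, -z) / (2.0 * math.pi) + 0.5    # longitude -> column
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # latitude  -> row
    return u, v
```

A renderer evaluates this per shading point, typically with the reflected view direction, and samples the map at (u, v).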

Capturing Maps
- Bring a highly reflective sphere along to the set
- Take multiple pictures of the ball:
  - Place the ball in important locations (which ones?)
  - Take a few pictures around the ball (how many?)
- Go home, stitch the pictures together, and re-project to get a map

Example Images

Resulting Map
- Need to re-project from image space to environment map coordinates
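The re-projection starts from the mirror-ball geometry: each pixel on the ball shows whatever world direction reflects into the camera at that point. A sketch, assuming an orthographic camera looking down -z (my simplification; a real pipeline would also calibrate the camera):

```python
import numpy as np

def ball_pixel_to_direction(x, y):
    """Map normalized mirror-ball coordinates (x, y in [-1, 1]) to the
    world direction that reflects into an orthographic camera looking
    down -z.  Points with x^2 + y^2 > 1 miss the ball."""
    r2 = x * x + y * y
    if r2 > 1.0:
        raise ValueError("point lies outside the ball")
    n = np.array([x, y, np.sqrt(1.0 - r2)])   # unit surface normal
    view = np.array([0.0, 0.0, -1.0])         # incoming camera ray
    return view - 2.0 * np.dot(view, n) * n   # mirror reflection
```

The ball's center reflects straight back toward the camera, and the rim sees directly behind the ball, which is why a single photograph covers nearly the full sphere of directions.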

High-Dynamic Range Maps
- The environment map needs higher dynamic range than the final film print. Why?
- But cameras themselves are low dynamic range devices
- High dynamic range cameras are becoming available, but you can do better with a standard camera
- How do you get a high dynamic range image from a standard camera?

High Dynamic Range Imaging (Debevec and Malik, SIGGRAPH 1997)
- Problem: the limited dynamic range of film or CCDs makes it impossible to capture a high dynamic range scene in a single image
- Solution: take multiple images at different exposures
- Problem: how do the pieces get put back together into a single, composite image? This is made difficult because the mapping from incoming radiance to pixel values is non-linear and poorly documented
- Solution: this paper
- Very influential for such a simple idea; used in lots of other papers
- Code is available

Solution: Capture Many Images

Quantities
- The output you see, pixel values Z, from scanned film or a digital camera, is some function f of the scene irradiance: Z = f(X)
- X is the product of irradiance and exposure time: X = E Δt
- Assuming the "principle of reciprocity": doubling the exposure time and halving the irradiance gives the same output, and vice versa
- Aim: recover f, to allow inversion from observed pixel values back to scene irradiances
- Assumption: f is monotonic (surely true, or it's a useless imaging device)
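Reciprocity just says the pixel value depends only on the product X = E Δt, not on E and Δt separately. A toy response function makes the point; the gamma curve below is an arbitrary illustration, not a real camera model:

```python
def toy_camera(E, dt, gamma=0.45):
    """Hypothetical film response: the pixel value depends only on the
    exposure X = E * dt, which is exactly what reciprocity asserts."""
    X = E * dt
    return min(255, int(255 * X ** gamma))

a = toy_camera(0.4, 0.5)   # irradiance 0.4 for half the time
b = toy_camera(0.2, 1.0)   # half the irradiance, double the time
```

Both calls see the same exposure X = 0.2, so they produce the same pixel value regardless of the (unknown, non-linear) response curve.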

Input
- A set of images, indexed by j, with known exposure times Δt_j
- Call the observed value in image j at pixel i: Z_ij
- Taking logs of Z_ij = f(E_i Δt_j) gives an equation involving g = ln f^-1 and E_i: g(Z_ij) = ln E_i + ln Δt_j
- We want the g and E_i that best represent the given data (the images)

Solving
- Solve a linear least squares problem whose objective contains terms fitting the function and terms for its smoothness, plus weights giving more credence to values in the mid-range of the imaging system's dynamic range
- The result is only determined up to a scale factor, so set a mid-range pixel value to unit radiance
- Don't use all the pixel values: about 50 pixels (chosen by hand), and enough images to cover the range
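The least-squares system can be sketched in a few lines of NumPy. The structure (data-fitting rows, a scale-fixing row, and second-difference smoothness rows with a hat weighting) follows the paper's published routine, but the details here are my simplifications:

```python
import numpy as np

def gsolve(Z, log_dt, lam=10.0, zmin=0, zmax=255):
    """Recover the log response curve g = ln f^-1 and per-pixel log
    irradiances by linear least squares, after Debevec & Malik 1997.
    Z[i, j]: value of pixel i in image j; log_dt[j]: ln(exposure time)."""
    n = zmax - zmin + 1                      # number of pixel values
    npix, nimg = Z.shape
    # hat weighting: trust mid-range values most (+1 avoids zero weights)
    w = np.minimum(np.arange(n), n - 1 - np.arange(n)) + 1.0
    A = np.zeros((npix * nimg + n - 1, n + npix))
    b = np.zeros(A.shape[0])
    k = 0
    for i in range(npix):                    # data terms: g(Z_ij) - ln E_i = ln dt_j
        for j in range(nimg):
            wij = w[Z[i, j] - zmin]
            A[k, Z[i, j] - zmin] = wij
            A[k, n + i] = -wij
            b[k] = wij * log_dt[j]
            k += 1
    A[k, n // 2] = 1.0                       # fix the scale: g(mid-range) = 0
    k += 1
    for z in range(1, n - 1):                # smoothness terms on g''
        A[k, z - 1], A[k, z], A[k, z + 1] = lam * w[z], -2 * lam * w[z], lam * w[z]
        k += 1
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    return x[:n], x[n:]                      # g over [zmin, zmax], ln E_i
```

Once g is known, each observation gives an estimate ln E_i = g(Z_ij) - ln Δt_j, and the per-pixel estimates are combined with the same hat weighting to build the radiance map.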

Results – Store Mapping

Results – Store Log Plot

Results – Church Input

Results – Church Rendering (Ward's Histogram Method)

Image-Based Rendering
- Geometry and light interaction may be difficult and expensive to model; imagine the complexity of modeling the exact geometry of carpet, as just one example
- Image-based rendering seeks to replace geometry and surface properties with images
- We may or may not know the viewing parameters for the existing images
- The existing images may be photographs or computer-generated renderings

e.g. Texture Mapping
- Use photographs to represent complex reflectance functions
- There are variants that seek to do better than standard texture mapping:
  - Store view-direction-specific information (what sort of effects can you get?)
  - Store lighting-specific information
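The "view-direction-specific information" idea can be sketched as view-dependent texture blending: weight each stored photograph of a surface point by how closely its capture direction matches the current viewing direction. The weighting scheme, exponent, and names below are illustrative, not from a specific system:

```python
import numpy as np

def view_dependent_blend(colors, capture_dirs, query_dir, sharpness=8):
    """Blend texel colors captured from several viewing directions,
    weighting each by the angular proximity of its capture direction
    to the query direction."""
    v = np.asarray(query_dir, float)
    v = v / np.linalg.norm(v)
    D = np.asarray(capture_dirs, float)
    D = D / np.linalg.norm(D, axis=1, keepdims=True)
    w = np.clip(D @ v, 0.0, None) ** sharpness   # favor the closest view
    if w.sum() == 0.0:
        w = np.ones(len(D))                      # fall back to a plain average
    w = w / w.sum()
    return w @ np.asarray(colors, float)
```

Because the blend tracks the viewer, effects that a static texture cannot capture, such as specular highlights that move with the eye, come along for free from the photographs.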

Plenoptic Function
- Returns the radiance P(x, θ, φ, λ, t):
  - passing through a given point, x
  - in a given direction, (θ, φ)
  - with given wavelength, λ
  - at a given time, t
- Many image-based rendering approaches can be cast as sampling from and reconstructing the plenoptic function
- Note: the function is generally constant along segments of a line (assuming vacuum)
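A toy plenoptic function makes the signature and the constant-along-rays property concrete. The scene here is hypothetical, lit only by a distant source along +z, chosen so that radiance depends purely on direction:

```python
import math

def plenoptic(x, y, z, theta, phi, wavelength=550e-9, t=0.0):
    """Toy plenoptic function: radiance at point (x, y, z), in direction
    (theta, phi), at a given wavelength and time.  With only a distant
    source, the value depends on direction alone, so it is constant
    along any ray through empty space."""
    dz = math.cos(theta)      # z component of the viewing direction
    return max(0.0, dz)       # brightest looking straight at the source

# Constancy along a line in vacuum: two points on the same ray agree.
p1 = plenoptic(0.0, 0.0, 0.0, 0.3, 1.0)
p2 = plenoptic(5.0, -2.0, 7.0, 0.3, 1.0)   # different point, same direction
```

That constancy is what light-field methods exploit: it lets a 5D (or, for static monochrome scenes, 4D) sampling stand in for the full 7D function.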

IBR Systems
Methods differ in many ways:
- The range of new viewpoints allowed
- The density of input images
- The representation for samples (known images)
- The amount of user help required
- The amount of additional information required (such as intrinsic camera parameters)
- The method for gathering the input images

Movie-Map Approaches
- Film views from closely spaced, fixed locations, and store them (storage can be an issue)
- Allow the user to jump from location to location, and to pan; the appropriate images are retrieved from disk and displayed
- No re-projection: just use the nearest existing sample
- Still used in video games today, but with computer-generated movies. Which games (somewhat dated now)?
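The retrieval step really is that simple: no re-projection, just the nearest stored sample. A sketch with a hypothetical data layout and file names:

```python
def nearest_view(views, query):
    """Movie-map lookup: return the stored view filmed closest to the
    requested (x, y) location.  No warping or blending is performed."""
    qx, qy = query
    loc = min(views, key=lambda p: (p[0] - qx) ** 2 + (p[1] - qy) ** 2)
    return views[loc]

views = {(0.0, 0.0): "lobby.mov", (5.0, 0.0): "hall.mov"}
```

The visual popping this causes when the user moves is exactly what the later, reprojection-based methods set out to eliminate.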

Quicktime VR (Chen, 1995)
- Movie-maps in software
- Construct panoramic images by stitching together a series of photographs
  - A semi-automatic process, based on correlation: scale/shift the images so that they look most alike
  - Works best with >50% overlap
- A finite set of panoramas; the user jumps from one to the other
- The hard part is figuring out the projection that takes points in the panorama and reconstructs a planar perspective image
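The hard part mentioned above, going between a planar perspective view and the cylindrical panorama, reduces to per-pixel angle computations. A sketch of the view-to-panorama direction; the camera model (principal point at the origin, pan about the vertical axis) and names are my simplifications:

```python
import math

def view_to_cylindrical(px, py, focal, pan=0.0):
    """For a virtual pinhole view with focal length in pixels, map pixel
    (px, py) to cylindrical panorama coordinates: longitude in radians
    and height on the unit cylinder.  Sampling the panorama at those
    coordinates is omitted."""
    dx, dy, dz = px, py, focal                  # ray through the pixel, z forward
    cx = dx * math.cos(pan) + dz * math.sin(pan)   # pan about vertical axis
    cz = -dx * math.sin(pan) + dz * math.cos(pan)
    lon = math.atan2(cx, cz)                    # column angle on the cylinder
    h = dy / math.hypot(cx, cz)                 # row on the unit cylinder
    return lon, h
```

Running this for every pixel of the desired view, and interpolating in the panorama, reconstructs a planar perspective image for any pan angle.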

Results - Warping

Results - Stitching

View Interpolation (Chen and Williams, 1993)
- Input: a set of synthetic images with known depth and camera parameters (location, focal length, etc.)
- Computes optical flow maps relating each pair of images; an optical flow map is the set of vectors describing where each point in the first image moves to in the second image
- Morphs between images by moving points along their flow vectors
- The intermediate views are "real" views only in special cases
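Moving points along flow vectors is just linear interpolation of positions. A minimal sketch of that step (the array layout is mine; the real system also handles visibility and hole filling):

```python
import numpy as np

def interpolate_view(points0, flow, t):
    """Synthesize point positions for an intermediate view by moving
    each point a fraction t of the way along its optical-flow vector.
    points0: (N, 2) positions in the first image; flow: (N, 2) vectors
    to the corresponding positions in the second image; t in [0, 1]."""
    return points0 + t * flow

p0 = np.array([[10.0, 20.0], [30.0, 40.0]])
flow = np.array([[4.0, -2.0], [0.0, 6.0]])
```

At t = 0 this reproduces the first image's points and at t = 1 the second's; the in-between positions are where the "real views only in special cases" caveat bites.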

View Morphing (Seitz and Dyer, 1997)
- Uses interpolation to generate new views such that the intermediate views represent real camera motion
- Observation: naive interpolation gives incorrect intermediate views (views not resulting from a real projection)

View Morphing
- Observation: interpolation gives correct intermediate views if the initial and final images are parallel views

View Morphing Process
Basic algorithm:
1. The user specifies a camera path that rotates and translates the initial camera onto the final camera
2. Pre-warp the input images to bring them into parallel views
3. Interpolate for the intermediate view
4. Post-warp to get the final result
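The three warps can be sketched on point sets. Real view morphing warps whole images and derives the homographies from the cameras' projection matrices; here the homographies are assumed given and the names are illustrative:

```python
import numpy as np

def view_morph(pts0, pts1, H0, H1, Hpost, t):
    """Sketch of the view-morphing pipeline for corresponding points:
    pre-warp both inputs into parallel views with homographies H0, H1,
    linearly interpolate, then post-warp with Hpost."""
    def apply_h(H, pts):
        ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
        return ph[:, :2] / ph[:, 2:3]          # back from homogeneous coords
    w0 = apply_h(H0, pts0)                     # pre-warp to parallel views
    w1 = apply_h(H1, pts1)
    mid = (1 - t) * w0 + t * w1                # plain interpolation is valid here
    return apply_h(Hpost, mid)                 # post-warp to the desired view
```

The middle step is ordinary linear interpolation; the pre- and post-warps are what turn it into a physically valid camera motion.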

View Morphing Process
- Requires knowledge of the projection matrices for the input images; these are found with vision algorithms, and the user may supply correspondences
- The intermediate motion can be specified by giving the trajectories of four points

Next Time
- More image-based rendering