03/12/03© 2003 University of Wisconsin Last Time: NPR Assignment; Projects; High-Dynamic-Range Capture; Image-Based Rendering Intro

03/12/03© 2003 University of Wisconsin Today: Several Image-Based Rendering (IBR) systems

03/12/03© 2003 University of Wisconsin IBR Systems Methods differ in many ways: –The range of new viewpoints allowed –The density of input images –The representation for samples (known images) –The amount of user help required –The amount of additional information required (such as intrinsic camera parameters) –The method for gathering the input images

03/12/03© 2003 University of Wisconsin Movie-Map Approaches Film views from fixed, closely spaced locations and store them –Storage can be an issue Allow the user to jump from location to location, and pan Appropriate images are retrieved from disk and displayed No re-projection – just uses the nearest existing sample Still used in video games today, but with computer-generated movies –Which games (somewhat dated now)?

03/12/03© 2003 University of Wisconsin QuickTime VR (Chen, 1995) Movie-maps in software Construct panoramic images by stitching together a series of photographs –Semi-automatic process, based on correlation: scale/shift images so that they look most alike –Works best with >50% overlap Finite set of panoramas – the user jumps from one to the other
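The correlation step can be pictured with a small sketch (a hypothetical helper, not Chen's code): slide the right image across the left one and keep the overlap width with the highest normalized cross-correlation. Real stitching also has to handle vertical drift, scale changes, and cylindrical warping.

```python
import numpy as np

def best_overlap(left, right, max_overlap=200):
    """Find the overlap width that best aligns two grayscale images.

    Assumes a purely horizontal pan; the score is the normalized
    cross-correlation of the two overlapping strips.
    """
    best, best_score = 0, -np.inf
    for w in range(8, max_overlap):
        a = left[:, -w:].astype(float)    # right edge of the left image
        b = right[:, :w].astype(float)    # left edge of the right image
        a = (a - a.mean()) / (a.std() + 1e-8)
        b = (b - b.mean()) / (b.std() + 1e-8)
        score = (a * b).mean()            # normalized cross-correlation
        if score > best_score:
            best, best_score = w, score
    return best  # overlap width in pixels
```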

03/12/03© 2003 University of Wisconsin Results - Warping

03/12/03© 2003 University of Wisconsin Results - Stitching

03/12/03© 2003 University of Wisconsin View Interpolation (Chen and Williams, 1993) Input: A set of synthetic images with known depth and camera parameters (location, focal length, etc.) Computes optical flow maps relating each pair of images –The optical flow map is the set of vectors describing where each point in the first image moves to in the second image Morphs between images by moving points along flow vectors Intermediate views are “real” views only in special cases
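As a toy illustration of morphing along flow vectors (a hypothetical helper; the actual system also resolves overlaps with depth ordering and fills holes), each source pixel is pushed a fraction t of the way along its flow vector into the intermediate image:

```python
import numpy as np

def forward_warp(image, flow, t):
    """Splat pixels a fraction t of the way along their flow vectors.

    image: (H, W, 3) array, flow: (H, W, 2) array of (dx, dy) per pixel.
    No hole filling or depth-based overlap resolution is done here.
    """
    H, W = image.shape[:2]
    out = np.zeros_like(image)
    ys, xs = np.mgrid[0:H, 0:W]
    xd = np.clip(np.round(xs + t * flow[..., 0]).astype(int), 0, W - 1)
    yd = np.clip(np.round(ys + t * flow[..., 1]).astype(int), 0, H - 1)
    out[yd, xd] = image[ys, xs]   # later pixels simply overwrite earlier ones
    return out
```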

03/12/03© 2003 University of Wisconsin View Morphing (Seitz and Dyer, 1996) Uses interpolation to generate new views such that the intermediate views represent real camera motion Observation: Interpolation gives incorrect intermediate views (not resulting from a real projection)

03/12/03© 2003 University of Wisconsin View Morphing Observation: Interpolation gives correct intermediate views if the initial and final images are parallel views
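A quick check of this observation, as a sketch for the simplest case of two parallel cameras with the same focal length f, one at the origin and one translated by d along the image x-axis: a scene point (X, Y, Z) projects to

$$x_0 = \frac{fX}{Z}, \qquad x_1 = \frac{f\,(X-d)}{Z}, \qquad (1-s)\,x_0 + s\,x_1 = \frac{f\,(X - s d)}{Z},$$

which is exactly the x-coordinate produced by a real camera at (sd, 0, 0); the y-coordinate is unchanged. For non-parallel views the points are divided by different depths and the interpolated result no longer corresponds to any real projection, which is why pre-warping into parallel views is needed.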

03/12/03© 2003 University of Wisconsin View Morphing Process Basic algorithm: –User specifies a camera path that rotates and translates the initial camera onto the final camera –Pre-warp input images to get them into parallel views –Interpolate for the intermediate view –Post-warp to get the final result

03/12/03© 2003 University of Wisconsin View Morphing Process Requires knowledge of projection matrices for the input images –Found with vision algorithms. User may supply correspondences Intermediate motion can be specified by giving trajectories of four points

03/12/03© 2003 University of Wisconsin Plenoptic Modeling (McMillan and Bishop, 1995) Input: Set of panoramic images gathered by panning a video camera on a tripod about a vertical axis through its optical center The algorithm: –Determines the properties of each camera and registers the images associated with each pan –Determines the relative locations of each camera –Determines the depth of each point seen by multiple cameras –Generates new views by reconstructing the plenoptic function from the available samples

03/12/03© 2003 University of Wisconsin Fitting Each Camera Problem: –Given several images all taken with the same camera rotated about its optical center –Determine the camera parameters –Determine the angle between adjacent images Approach: –Multi-stage optimization –First stage estimates the angle between images and the focal length –Second stage refines these and gets the remaining parameters When done, can map a pixel in one image to its correct position in any other image

03/12/03© 2003 University of Wisconsin Locating the Cameras Given cylindrical projections (panoramic images) for two cameras User identifies tie-points (points seen in both images) Each tie-point defines two rays – one from the center of each camera through the tie-point –These rays should intersect at the world location for the point Minimization step finds the camera locations and some other parameters that minimize the perpendicular distance between all the rays
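The geometric core of that minimization can be sketched as follows (illustrative only: here the camera centers are held fixed and only the tie-point's world position is solved for, whereas the real system also optimizes the camera locations). The point minimizing the summed squared perpendicular distance to a set of rays has a closed-form least-squares solution:

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point minimizing perpendicular distance to a set of rays.

    origins, directions: (N, 3) arrays; directions need not be unit length.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        M = np.eye(3) - np.outer(d, d)   # projects onto the plane normal to d
        A += M
        b += M @ p
    return np.linalg.solve(A, b)
```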

03/12/03© 2003 University of Wisconsin Determining Disparity The minimization algorithm gives us the world location of the tie-points, but what about the rest of the points in the image? Use standard computer vision techniques to find the remaining disparities –Disparity is the angular difference between the locations of a point in two images –Directly related to the depth of the point Makes heavy use of the epipolar constraint

03/12/03© 2003 University of Wisconsin The Epipolar Constraint The location of a point in one image constrains it to lie on a ray in space passing through the camera and image point This ray projects to a curve in the second image –Line for planar projection –Sine curve for cylindrical projection Since the point must lie on the ray in world space, it must lie on the curve in the second image Reduces the search for correspondences to a 1D search along that line (or curve)
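For the planar-projection case the constraint is the familiar fundamental-matrix relation x'ᵀ F x = 0. A small sketch of using it to prune a correspondence search (F is assumed to be known, e.g. from the registration stage; for cylindrical panoramas the same idea applies along a sine curve rather than a line):

```python
import numpy as np

def epipolar_distance(F, x, x_prime):
    """Distance from x' to the epipolar line l = F x (planar images).

    x, x_prime: homogeneous image points (3,); F: 3x3 fundamental matrix.
    """
    l = F @ x                              # epipolar line in the second image
    return abs(x_prime @ l) / np.hypot(l[0], l[1])

def prune_candidates(F, x, candidates, tol=1.5):
    """Keep only candidate matches lying (nearly) on the epipolar line."""
    return [c for c in candidates if epipolar_distance(F, x, c) < tol]
```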

03/12/03© 2003 University of Wisconsin Panoramas

03/12/03© 2003 University of Wisconsin Reconstructing a New View The disparity from a known cylinder pair can be transferred to a new cylinder, and then re-projected onto a plane (the image) –Can all be done in one step Have to resolve occlusion problems –Two points from the reference image could map to the same point in the output image –Solution: Define a fill ordering that guarantees correct occlusion (an important contribution of this paper) Also have to fill holes

03/12/03© 2003 University of Wisconsin Rendering Order

03/12/03© 2003 University of Wisconsin Working from Photos Image-based rendering obviously relies heavily on computer vision techniques –Particularly: Depth from stereo –The problem is very hard with real images –These techniques are not perfect! Sampling remains a problem –Images tend to appear blurry –Relatively little work on reconstruction algorithms

03/12/03© 2003 University of Wisconsin Light Field Rendering or Lumigraphs Aims: –Sample the plenoptic function, or light field, densely –Store the samples in a data structure that is easy to access –Rendering is simply averaging of samples The plenoptic function gives the radiance passing through a point in space in a particular direction In free space: Gives the radiance along a line –Recall that radiance is constant along a line

03/12/03© 2003 University of Wisconsin Storing Light Fields Each sample of the light field represents radiance along a line Required operations: –Store the radiance associated with an oriented line –Look up the radiance of lines that are “close” to a desired line Hence, we need some way of describing, or parameterizing, oriented lines –A line is a 4D object –There are several possible parameterizations

03/12/03© 2003 University of Wisconsin Parameterizing Oriented Lines Desirable properties: –Efficient conversion from lines to parameters –Control over which subset of lines is of interest –Ease of uniform sampling of lines in space Parameterize lines by their intersections with two planes in arbitrary positions –Take (s,t) as the intersection of the line with one plane and (u,v) as its intersection with the other: L(s,t,u,v) –Light Slab: use two quadrilaterals (squares) and restrict each of s,t,u,v to [0,1]
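A minimal sketch of the two-plane conversion, assuming for simplicity that the slab planes are axis-aligned at z = 0 and z = 1 (the real light slab uses two arbitrary quadrilaterals and rescales the intersections into [0,1]):

```python
import numpy as np

def ray_to_slab_coords(origin, direction, z_st=0.0, z_uv=1.0):
    """Intersect a ray with the two slab planes (here z = z_st and z = z_uv).

    Returns (s, t, u, v); a full light-slab implementation would also
    rescale these by the extents of the two quadrilaterals.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    t1 = (z_st - o[2]) / d[2]
    t2 = (z_uv - o[2]) / d[2]
    s, t = (o + t1 * d)[:2]
    u, v = (o + t2 * d)[:2]
    return s, t, u, v
```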

03/12/03© 2003 University of Wisconsin A Slab

03/12/03© 2003 University of Wisconsin Line Space An alternate parameterization is line space –Better for looking at subsets of lines and verifying sampling patterns –In 2D, parameterize lines by their angle with the x-axis and their perpendicular distance from the origin –Extension to 3D is straightforward Every line in space maps to a point in line space, and vice versa –The two spaces are dual –Some operations are much easier in one space than the other
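In 2D the mapping is a two-liner. The sketch below (names are illustrative) uses the line's angle with the x-axis and its signed perpendicular distance from the origin, so every oriented line becomes a single point (theta, r):

```python
import numpy as np

def line_to_point(x0, y0, x1, y1):
    """Map an oriented 2D line through (x0, y0) -> (x1, y1) to (theta, r)."""
    dx, dy = x1 - x0, y1 - y0
    theta = np.arctan2(dy, dx)                    # angle with the x-axis
    r = (x0 * dy - y0 * dx) / np.hypot(dx, dy)    # signed distance from origin
    return theta, r
```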

03/12/03© 2003 University of Wisconsin Verifying Sampling Patterns

03/12/03© 2003 University of Wisconsin Capturing Light Fields Render synthetic images Capture digitized photographs –Use a gantry to carefully control which images are captured Makes it easy to control the light field sampling pattern Hard to build the gantry –Use a video camera Easy to acquire the images Hard to control the sampling pattern

03/12/03© 2003 University of Wisconsin Render synthetic images Decide which line you wish to sample, and cast a ray, or Render an array of images from points on the (u,v) plane – pixels in the images are points on the (s,t) plane Antialiasing is essential, both in (s,t) and (u,v) –Standard antialiasing and aperture filtering

03/12/03© 2003 University of Wisconsin Tightly Controlled Capture Use a computer-controlled gantry to move a camera to fixed positions and take digital images Looks in at an object from outside –Must acquire multiple slabs to get full coverage –Care must be taken with camera alignment and optics Object is rotated in front of the gantry to get multiple slabs –Must ensure lighting moves with the object Effectively samples the light field on a regular grid, so rendering is easier

03/12/03© 2003 University of Wisconsin Capture from Hand Held Video Place the object on a calibrated stage –Colored to allow blue-screening –Markers to allow easy determination of camera pose Wave the camera around in front of the object –Map to help guide where more samples are required Camera must be calibrated beforehand Output: A large number of non-uniform samples Problem: Have to re-sample to get regular sampling for rendering

03/12/03© 2003 University of Wisconsin Re-Sampling the Light Field Basic problem: –Input: The set of irregular samples from the video capture process –Output: Estimates of the radiance on a regular grid in parameter space Algorithm outline: –Use a multi-resolution algorithm to estimate radiance in under-sampled regions –Use a binning algorithm to uniformly resample without bias
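A stripped-down version of the binning step (one resolution level only; function and variable names are illustrative): irregular samples are averaged into the cells of a regular grid, and empty cells would then be filled from coarser levels by the multi-resolution pass.

```python
import numpy as np

def bin_samples(coords, values, grid_shape):
    """Average irregular samples onto a regular grid (single level).

    coords: (N, 2) sample positions already scaled to grid units,
    values: (N,) radiance samples. Empty cells stay zero here.
    """
    acc = np.zeros(grid_shape)
    cnt = np.zeros(grid_shape)
    idx = np.clip(np.floor(coords).astype(int), 0, np.array(grid_shape) - 1)
    for (i, j), v in zip(idx, values):
        acc[i, j] += v
        cnt[i, j] += 1
    return np.divide(acc, cnt, out=np.zeros_like(acc), where=cnt > 0)
```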

03/12/03© 2003 University of Wisconsin Compression Light field samples must be dense for good rendering Dense light fields are big: 1.6GB –When rendering, samples could come from any part of the light field –All of the light field must be in memory for real-time rendering –But there is lots of data redundancy, so compression should do well Desirable compression scheme properties: –Random access to compressed data –Asymmetric – slow compression, fast decompression

03/12/03© 2003 University of Wisconsin Compression Scheme Vector Quantization: –Compression: Choose a codebook of reproduction vectors Replace all the vectors in the data with the index into the “nearest” vector in the codebook –Storage: The codebook plus the indexes –Decompression: Replace each index with the vector from the codebook Follow up with Lempel-Ziv entropy encoding (gzip) –Decompress into memory
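A minimal sketch of the vector-quantization round trip (codebook training, e.g. k-means on a subset of the samples, is omitted, and the follow-up gzip stage is not shown):

```python
import numpy as np

def vq_compress(vectors, codebook):
    """Replace each data vector with the index of its nearest codebook entry.

    vectors: (N, D) light-field sample vectors, codebook: (K, D).
    The indices (plus the codebook) are the compressed representation.
    """
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
    return d2.argmin(axis=1).astype(np.uint16)

def vq_decompress(indices, codebook):
    """Decompression is a single table lookup per vector."""
    return codebook[indices]
```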

03/12/03© 2003 University of Wisconsin Alternate Compression Schemes Neighboring “images” in (u,v) are likely to be very similar –Picture doesn’t change much as you move the camera a little –We know what the camera motion is –BRDF changes smoothly for many cases –Use MPEG or similar to encode a sequence of images –This has been discussed but not implemented “Textures” should compress well –Use hardware rendering from compressed textures

03/12/03© 2003 University of Wisconsin Rendering Ray-tracing: For each pixel in the image: –Determine the ray passing through the eye and the pixel –Interpolate the radiance along that ray from the nearest rays in the light-field Texture Mapping: –Finding the (u,v) and (s,t) coordinates is exactly the texture mapping operation –Use graphics hardware to do the job, or write a software texture mapper (maybe faster – only have to texture map two polygons) Use various interpolation schemes to control aliasing
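The interpolation can be sketched as a quadrilinear lookup in the 4D sample grid (assuming a scalar-valued array field[s, t, u, v]; cheaper variants such as nearest-neighbour in (u,v) with bilinear (s,t) trade quality for speed):

```python
import numpy as np

def lookup_radiance(field, s, t, u, v):
    """Quadrilinearly interpolate a light slab stored as field[s, t, u, v]."""
    def corners(x, n):
        i0 = int(np.clip(np.floor(x), 0, n - 2))
        return (i0, i0 + 1), x - i0          # neighbouring indices and fraction

    (s0, s1), fs = corners(s, field.shape[0])
    (t0, t1), ft = corners(t, field.shape[1])
    (u0, u1), fu = corners(u, field.shape[2])
    (v0, v1), fv = corners(v, field.shape[3])
    out = 0.0
    for si, ws in ((s0, 1 - fs), (s1, fs)):
        for ti, wt in ((t0, 1 - ft), (t1, ft)):
            for ui, wu in ((u0, 1 - fu), (u1, fu)):
                for vi, wv in ((v0, 1 - fv), (v1, fv)):
                    out += ws * wt * wu * wv * field[si, ti, ui, vi]
    return out
```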

03/12/03© 2003 University of Wisconsin Exploiting Geometry When using the video capture approach, build a geometric model –Use a volume carving technique When determining the “nearest” samples for rendering, use the geometry to choose better samples This has been further extended: –Surface point used for improving sampling determines focus –By default, we want focus at the object, so use the object geometry –Using other surfaces gives depth of field and variable focus

03/12/03© 2003 University of Wisconsin Surface Light Fields Instead of storing the complete light-field, store only lines emanating from the surface –Parameterize the surface mesh (standard technique) –Choose sample points on the surface –Sample the space of rays leaving the surface from those points –When rendering, look up nearby sample points and appropriate sample rays Best for rendering complex BRDF models –An example of view dependent texturing

03/12/03© 2003 University of Wisconsin Summary Light-fields capture very dense representations of the plenoptic function –Fields can be stitched together to give walkthroughs –The data requirements are large –Sampling still not dense enough – filtering introduces blurring Next time: Using domain specific knowledge