Graphics II 91.547, Image Based Rendering, Session 11 (12/5/2015 12:54)


Slide 1: Graphics II 91.547, Image Based Rendering, Session 11

Slide 2: A Rendering Taxonomy

Slide 3: The Plenoptic Function

"… the pencil of rays visible from any point in space, at any time, and over any range of wavelengths." Given a set of discrete samples (complete or incomplete) of the plenoptic function, the goal of image-based rendering is to generate a continuous representation of that function.
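The quoted definition is commonly written out (following Adelson & Bergen) as a seven-dimensional function; as a sketch in LaTeX:

```latex
% Plenoptic function: the radiance seen from viewing position
% (V_x, V_y, V_z), in direction (\theta, \phi), at wavelength
% \lambda, at time t. Image-based rendering reconstructs a
% continuous approximation of P from discrete samples (images).
P = P(V_x, V_y, V_z, \theta, \phi, \lambda, t)
```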

Slide 4: Movie Map (Lippman 1980)

(Diagram: viewer request → find nearest sample → movie storage → output image.)

Slide 5: Taxonomy of "Virtual Camera" Movement (Chen et al. 1995)
- Camera rotation
  - Camera fixed at a particular location
  - Three rotational degrees of freedom: pitch (up and down), yaw (about the vertical axis), roll (about the camera axis)
- Object rotation
  - Camera always pointing at the center of the object
  - Viewpoint constrained to move over the surface of a sphere
  - Three angular degrees of freedom
- Camera movement
  - Viewpoint unconstrained
  - Viewing direction unconstrained

Slide 6: Environment Maps

Map geometries: cube, sphere, cylinder.

Slide 7: QuickTime VR (Chen 1995)

Panorama: 2500 x 768 pixels = 1.9 Mpixels; at 3 bytes/pixel ≈ 5.8 MB; with 10:1 compression ≈ 0.5 MB per panorama.
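The storage budget above is straightforward to check; a small script using the slide's figures (2500 x 768 panorama, 3 bytes/pixel, 10:1 compression):

```python
# Storage budget for one QuickTime VR cylindrical panorama,
# using the numbers from the slide.
width_px, height_px = 2500, 768
bytes_per_pixel = 3
compression_ratio = 10

pixels = width_px * height_px                     # 1,920,000 ~ 1.9 Mpixels
raw_bytes = pixels * bytes_per_pixel              # ~5.76 MB uncompressed
compressed_bytes = raw_bytes / compression_ratio  # ~0.58 MB per panorama

print(f"{pixels / 1e6:.2f} Mpixels")
print(f"{raw_bytes / 1e6:.2f} MB raw, {compressed_bytes / 1e6:.2f} MB compressed")
```

This confirms the rounded slide figures: 1.92 Mpixels, 5.76 MB raw, and roughly half a megabyte per compressed panorama.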

Slide 8: Image Distortion from a Cylindrical Environment Map

(Diagram: projection plane vs. cylindrical environment map; the projection onto the plane must be pre-warped.)

Slide 9: QuickTime VR Image Warping for a Correct Perspective View

Slide 10: QuickTime VR Panoramic Display Process

(Diagram: compressed tiles on CD-ROM or hard disk are read into a compressed-tile cache in main memory; visible tiles are decompressed and warped into an offscreen buffer, whose visible region is shown in the display window.)

Slide 11: QuickTime VR: Accomplishing (Limited) Camera Motion

Slide 12: Accomplishing Camera Motion (Greene & Kass, 1993 Apple technical report)
- Regular 3-D lattice of cubic environment maps
- Each environment map is a z-buffered rendering from a discrete viewpoint
- An image from a new viewpoint is generated by re-sampling the environment maps
- Re-sampling involves rendering the pixels of the environment maps as 3-D polygons from the new viewpoint
- Rendering time is proportional to the environment-map resolution but independent of scene complexity
- Not suitable for real-time walkthrough performance on typical desktop computers (especially in 1993!)
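The re-sampling step above relies on each z-buffered environment-map pixel defining a 3-D point that can be re-rendered from the new viewpoint. A minimal sketch of that unprojection, assuming a simple pinhole camera (the intrinsics `K` and pose `cam_to_world` here are illustrative, not from the slides):

```python
import numpy as np

def unproject(depth, K, cam_to_world):
    """Turn a z-buffered image (H x W depth map with pinhole intrinsics K)
    into world-space points -- the 3-D samples that Greene & Kass
    re-render from the new viewpoint."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w) + 0.5, np.arange(h) + 0.5)  # pixel centers
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).reshape(3, -1)
    pts_cam = rays * depth.reshape(1, -1)            # scale each ray by its depth
    pts_h = np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))])
    return (cam_to_world @ pts_h)[:3].T              # N x 3 world points

# Toy example: a 2x2 face with constant depth 2, camera at the origin.
K = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
pts = unproject(np.full((2, 2), 2.0), K, np.eye(4))
print(pts.shape)  # (4, 3); every recovered point lies at z = 2
```

Once unprojected, the points (or small polygons spanning them) are rendered through the new camera, which is why the cost scales with map resolution rather than scene complexity.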

Slide 13: Alternative Approach: Work Entirely in Image Space
- A sequence of images from closely spaced viewpoints is highly coherent
- Depends on the ability to establish a pixel-by-pixel correspondence between adjacent images
  - Can be computed if range data and camera parameters are known (true for rendered images)
  - For natural images there are several techniques, including manual user intervention
- The pairwise correspondence between two images can be stored as a pair of morph maps
  - Bi-directional maps are required because of possible many-to-one and one-to-many pixel correspondences
- Can be represented by a graph data structure in which nodes are images and arcs are bi-directional morph maps

Slide 14: N-Dimensional Graph Data Structure

(Diagram: images as nodes connected by bi-directional morph maps.)
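The graph structure from the previous slide (nodes = images, arcs = bi-directional morph maps) might be sketched as follows; the class and its identifiers are illustrative, not from the original lecture:

```python
# Nodes are reference images; each arc stores two morph maps, one per
# direction, since pixel correspondences can be many-to-one.
class MorphGraph:
    def __init__(self):
        self.images = {}       # image_id -> image data
        self.morph_maps = {}   # (src_id, dst_id) -> per-pixel offset map

    def add_image(self, image_id, image):
        self.images[image_id] = image

    def connect(self, a, b, map_a_to_b, map_b_to_a):
        # Store both directions explicitly (bi-directional arc).
        self.morph_maps[(a, b)] = map_a_to_b
        self.morph_maps[(b, a)] = map_b_to_a

    def neighbors(self, image_id):
        return [dst for (src, dst) in self.morph_maps if src == image_id]

g = MorphGraph()
g.add_image("view0", None)
g.add_image("view1", None)
g.connect("view0", "view1", map_a_to_b="fwd", map_b_to_a="bwd")
print(g.neighbors("view0"))  # ['view1']
```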

Slide 15: Simple View Interpolation

(Diagram: corresponding pixels in Reference Image 1 and Reference Image 2, linked by morph maps.)

Slide 16: Image Overlap (Image Folding)

(Diagram: points P1 and P2, distinct in the reference view, map to the same pixel in the interpolated view.)

Slide 17: Image Holes (Image Stretching)

(Diagram: points P1 and P2, adjacent in the reference view, separate in the interpolated view, leaving a hole.)

Slide 18: Example of a Hole Region

(Images: Viewpoint 1 and Viewpoint 2.)

Slide 19: Minimizing Hole Regions with Closely Spaced Viewpoints

(Images: Viewpoint 1 and Viewpoint 2.)

Slide 20: Source Image Viewed from a Camera Moved to the Right

(Images: Reference View 1 and Reference View 2.)

Slide 21: Offset Vectors for Camera Motion

(Diagram: a morph map shown as a field of per-pixel offset vectors.)

Slide 22: Locus of the Morph Map for Motion Parallel to the Image Plane and Floor

Slide 23: Distortion of Intermediate Images with a Linear Warp

(Diagram: the linear path of one feature.)

Slide 24: Morphing Parallel Views

(Images: reference image and interpolated image.)

Slide 25: View Interpolation: The Algorithm
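The algorithm figure itself is not reproduced in the transcript. A minimal sketch of the Chen-Williams-style forward mapping it describes, under the assumptions stated here: each source pixel is moved along its morph-map offset scaled by the interpolation parameter t, overlaps (folds) are resolved with a z-test, and unfilled pixels are left as holes (which a full implementation would fill from the second reference image or by stretching):

```python
import numpy as np

def interpolate_view(src, offsets, depth, t):
    """Forward-map pixels of `src` by t * offset, keeping the nearest
    (smallest-depth) pixel when several land on the same destination.
    Unfilled destination pixels stay NaN -- the 'holes' of slide 17."""
    h, w = src.shape
    dst = np.full((h, w), np.nan)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            dx, dy = offsets[y, x]
            nx, ny = int(round(x + t * dx)), int(round(y + t * dy))
            if 0 <= nx < w and 0 <= ny < h and depth[y, x] < zbuf[ny, nx]:
                zbuf[ny, nx] = depth[y, x]   # z-test resolves folding
                dst[ny, nx] = src[y, x]
    return dst

# Toy 1-row example: the rightmost pixel shifts left by 2 at t = 1,
# colliding with pixel 0 (a fold) and vacating its own position (a hole).
src = np.array([[1.0, 2.0, 3.0]])
offsets = np.zeros((1, 3, 2))
offsets[0, 2] = (-2.0, 0.0)
out = interpolate_view(src, offsets, np.ones((1, 3)), t=1.0)
```

In the toy run, the first two pixels stay put and the vacated third position remains NaN, illustrating why hole filling is a required second pass.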

Slide 26: Example 1 of Calculated Intermediate Images

(Images: Reference Image 1, Reference Image 2, and intermediate views.)

Slide 27: Example 2 of Calculated Intermediate Images

(Images: Reference Image 1, Reference Image 2, and the interpolated image.)

Slide 28: Multiple-Center-of-Projection Images (Rademacher & Bishop 1998)
- Information from a set of viewpoints is stored in a single image
- Features:
  - Greater connectivity information compared with collections of standard images
  - Greater flexibility in the acquisition of image-based datasets, e.g. sampling different portions of the scene at different resolutions

Slide 29: Multiple-Center-of-Projection Images: Definition

A multiple-center-of-projection image consists of a two-dimensional image and a parameterized set of cameras meeting the following conditions:
- The cameras must lie on either a continuous curve or a continuous surface
- Each pixel is acquired by a single camera
- Viewing rays vary continuously across neighboring pixels
- Two neighboring pixels must correspond either to the same camera or to neighboring cameras
- Each pixel contains range information

Slide 30: An MCOP Image

Slide 31: Strip Camera Used to Capture Real MCOP Images

Slide 32: Camera Path in Capturing an MCOP Image of a Castle

Slide 33: Image Plane for Camera Motion

Slide 34: The Resulting 1000 x 500 MCOP Image

Slide 35: Reprojection

Camera model, stored per column of the MCOP image:
- Center of projection C
- Vector O from C to the image-plane origin
- Horizontal axis U of the viewing plane
- Vertical axis V of the viewing plane
- Disparity = distance from C to the image plane divided by distance from C to the pixel's world-space point

Reprojection formula: (not captured in the transcript)
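Since the reprojection formula itself did not survive in the transcript, here is a hedged reconstruction that follows directly from the disparity definition above, under the added assumption that the "distance to the image plane" is measured along the pixel's own ray (so the ray vector O + u*U + v*V can simply be divided by the disparity); see Rademacher & Bishop 1998 for the exact form:

```python
import numpy as np

def reproject(C, O, U, V, u, v, disparity):
    """Recover the world-space point seen at pixel (u, v) of an MCOP
    column with per-column camera (C, O, U, V).

    Assumption (not from the slide): the plane distance in the disparity
    definition is taken along the pixel's ray, giving
        X = C + (O + u*U + v*V) / disparity.
    """
    ray = O + u * U + v * V          # vector from C to the pixel on the plane
    return C + ray / disparity

# Toy check: image plane one unit in front of the camera; a disparity of
# 0.5 means the world point is twice as far away as the plane.
C = np.zeros(3)
O = np.array([0.0, 0.0, 1.0])
U = np.array([1.0, 0.0, 0.0])
V = np.array([0.0, 1.0, 0.0])
X = reproject(C, O, U, V, u=0.0, v=0.0, disparity=0.5)
print(X)  # [0. 0. 2.]
```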

Slide 36: View of the Castle Reconstructed from the MCOP Image

Slide 37: Another View of the Castle Reconstructed from the MCOP Image

Slide 38: Lumigraphs
- Lumigraph = a representation of the light resulting from a scene
- A limited data representation of the plenoptic function
- Generated from multiple images and camera "poses"
- Rendering: image = lumigraph + camera model
- A special case of the 4D light field (Levoy & Hanrahan)

Slide 39: What Is a Lumigraph?

For all points on a surrounding surface, and for all directions, the color intensity of the ray. Assumption: the viewer is outside a convex hull containing the objects.

Slide 40: Parameterization of the Lumigraph

(Images from Steven Gortler, SIGGRAPH 1999.)
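The lumigraph (like the 4D light field) parameterizes each ray by its intersections (s, t) and (u, v) with two parallel planes; with discrete samples, new views are rendered by looking up and interpolating rays. A toy sketch of the discrete structure and a nearest-sample lookup (real renderers interpolate quadrilinearly over the 16 surrounding samples; all sizes and values here are illustrative):

```python
import numpy as np

# Discrete two-plane light field: L[s, t, u, v] stores the radiance of the
# ray through grid point (s, t) on the near plane and (u, v) on the far
# plane. A 4x4x4x4 toy field with one known sample:
L = np.zeros((4, 4, 4, 4))
L[1, 2, 3, 0] = 0.75

def sample_nearest(L, s, t, u, v):
    """Nearest-sample lookup into the 4D field."""
    idx = tuple(int(round(np.clip(c, 0, n - 1)))
                for c, n in zip((s, t, u, v), L.shape))
    return L[idx]

# A continuous ray near the stored sample rounds to it:
print(sample_nearest(L, 1.2, 2.4, 2.6, 0.1))  # 0.75
```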

Slide 41: Building the Lumigraph

Slide 42: Approximating the Lumigraph with Discrete Samples

Slide 43: Views of a Light Field (Lumigraph)

Levoy & Hanrahan, "Light Field Rendering," Computer Graphics (SIGGRAPH 1996).