Rendering with Concentric Mosaics Heung-Yeung Shum and Li-Wei He Presentation By: Jonathan A. Bockelman.

Presentation transcript:


Agenda 1) A general description of concentric mosaics 2) Rendering concentric mosaics 3) Capturing concentric mosaics 4) Some examples 5) Issues that still need to be resolved and future plans 6) A brief demo

Rendering Made Easy... Sort Of  Problems with traditional rendering schemes  The appeal of image-based modeling and rendering  The plenoptic function

History of Plenoptic Functions

What is a Concentric Mosaic?  “A manifold mosaic”  A 3D plenoptic function: radius, rotation angle, and vertical elevation  A 3D image built from a series of 360° slit images

Rendering a Novel View  Any point within the outermost circle can be the viewpoint  Rays tangent to the camera paths are used  Bilinear interpolation between neighboring mosaics can also be used
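The lookup behind this slide is purely geometric: a viewing ray in the capture plane is tangent to exactly one concentric circle, so it maps to one column of one stored mosaic. Below is a minimal sketch of that mapping (the function name and the nearest-circle snapping are illustrative; the paper interpolates between neighboring mosaics instead of snapping):

```python
import math

def ray_to_mosaic(eye, direction, radii):
    """Map a planar viewing ray to the concentric mosaic that stores it.

    eye: (x, y) viewpoint inside the outermost circle.
    direction: (dx, dy) unit viewing direction in the capture plane.
    radii: radii of the captured concentric circles.

    Returns (circle_index, theta): the nearest captured circle the ray
    is tangent to, and the rotation angle of the tangent point.
    """
    ex, ey = eye
    dx, dy = direction
    # Perpendicular distance from the rotation center (origin) to the
    # ray's supporting line: the ray is tangent to the circle of
    # exactly this radius.
    r = abs(ex * dy - ey * dx)
    # Foot of the perpendicular is the tangent point on that circle.
    t = -(ex * dx + ey * dy)
    tx, ty = ex + t * dx, ey + t * dy
    theta = math.atan2(ty, tx)
    # Snap to the nearest captured circle; bilinear interpolation
    # between the two neighbors gives smoother results.
    idx = min(range(len(radii)), key=lambda i: abs(radii[i] - r))
    return idx, theta
```

For example, a ray from (0.5, 0) pointing along +y is tangent to the circle of radius 0.5 at angle 0.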

The Problem of Non-Planar Rays  Rays off the plane need to be approximated  Objects are assumed to be at infinite depth  Vertical distortion is created

The Need for Depth Correction  Depth correction can fix the vertical distortion  3 types of depth correction exist

Full Perspective Correction  Individual corrections are made for each pixel  Exact depths of objects are necessary  Hole-filling problems are a complication  Excellent results are seen in synthetic scenes

Weak Perspective Correction  Corrections are made for each vertical line  Estimated depths are calculated  Vertical distortions can occur

Constant Depth Approximation  A constant depth is used  Users can control the assumed depth  Vertical distortions are produced if the wrong depth is given
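Under the constant-depth approximation, each slit column is vertically rescaled as if every object sat on a cylinder of the assumed radius, so objects at that depth keep their aspect ratio while everything else distorts. The sketch below is an illustrative derivation of that per-column scale factor, not the paper's exact formulation; the function name and geometry conventions are assumptions:

```python
import math

def depth_correction_scale(eye, direction, assumed_depth):
    """Per-column vertical scale under constant-depth correction.

    The viewing ray is tangent to the circle of radius r at point T,
    where the stored slit camera sits. Assuming every object lies on a
    cylinder of radius `assumed_depth` around the rotation center, the
    slit is rescaled by the ratio of the capture camera's distance to
    that cylinder over the novel viewpoint's distance.
    """
    ex, ey = eye
    dx, dy = direction
    r = abs(ex * dy - ey * dx)          # radius of the tangent circle
    t = -(ex * dx + ey * dy)            # distance from eye to T along the ray
    # Distance from T forward to the assumed-depth cylinder.
    reach = math.sqrt(assumed_depth ** 2 - r ** 2)
    capture_dist = reach                # slit camera sits at T
    render_dist = t + reach             # novel eye sits t behind T
    return capture_dist / render_dist
```

When the novel viewpoint coincides with the tangent point (t = 0) the scale is 1, i.e. no correction; viewpoints farther from the scene shrink the column (scale < 1).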

Consequences of a 3D Function  Vertical parallax is not captured  Much smaller data sets are required  Users can move in a circular region

Synthetic Mosaics  3D Studio Max can be used  Images are cut into slits  Depth values for each pixel can be found  Sampling is a bit tricky
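"Cutting images into slits" amounts to taking one vertical column from each rendered frame as the virtual camera rotates and concatenating the columns side by side. A minimal sketch (plain nested lists stand in for rendered images; the function name is illustrative):

```python
def build_mosaic(frames, slit_column=None):
    """Assemble one concentric mosaic from a rotation sequence.

    frames: list of H x W images (nested lists, frames[k][row][col])
            rendered at successive rotation angles.
    slit_column: which vertical column to cut; defaults to the center.

    Returns an H x N mosaic for N frames, i.e. one full rotation.
    """
    height = len(frames[0])
    width = len(frames[0][0])
    col = width // 2 if slit_column is None else slit_column
    # mosaic[row][k] = pixel from frame k's slit column
    return [[frame[row][col] for frame in frames] for row in range(height)]
```

Sampling is the tricky part the slide alludes to: the angular step between frames must match the slit width, or the mosaic will show seams or aliasing.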

How NOT to Do Real World Scenes  A series of single-slit cameras on a rotating beam  A single camera that can slide along a beam

The Lone Camera  A single off-centered camera sits on a rotary table  Regular images are taken  Multiple concentric mosaics can be recreated from one image sequence

Ideal Solution  A single camera can produce distortion  A few tangential cameras along a beam can correct the problem

How the Pros Do It  A single ordinary digital video camera is used with a rotary table  The camera faces radially outward  1351 frames are captured in 90 seconds  The system is incredibly simple and efficient

The Lobby Scene 3 concentric mosaics from a lobby scene

Occlusion Occlusion is captured.

Horizontal Parallax Horizontal parallax is simulated quite well.

Lighting and Glare Spectacular lighting effects are easy to create.

Constant Depth Correction Revisited  Aspect ratios are maintained at the chosen depth  Objects at other depths are distorted

Point vs. Bilinear Sampling  Point sampling is twice as fast, but image quality is lower  Bilinear sampling is slower, but images are much smoother
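The speed/quality trade-off on this slide is the standard one between nearest-neighbor and bilinear resampling. A self-contained sketch of both (images as nested lists; function names are assumptions):

```python
def point_sample(img, x, y):
    """Nearest-neighbor lookup: roughly twice as fast, but blockier."""
    return img[round(y)][round(x)]

def bilinear(img, x, y):
    """Bilinearly sample img[row][col] at fractional coordinates (x, y):
    blend the four surrounding pixels by their fractional offsets.
    Slower than point sampling, but produces much smoother images.
    """
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

For a 2x2 image [[0, 10], [20, 30]], sampling at the exact center (0.5, 0.5) blends all four pixels equally.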

Compression  Since adjacent frames are very similar, the data is highly redundant and compresses well.  Vector quantization and entropy coding shrink the 415 MB of original video to 16 MB.  MPEG-4 compression can reduce the data to 640 KB, but blocky artifacts are created.

Why Use Concentric Mosaics?  Quick and easy image capture  Parallax and specular highlights are preserved  Much smaller data sets than Lumigraphs  No messy geometry and lighting  User interaction is automatically incorporated

Future Endeavors  Correcting vertical distortion  Increasing the region of motion  Improving compression ratios

One Last Example

Demo

Mathematical Madness