Texture-Mapping Real Scenes from Photographs
Yizhou Yu, Computer Science Division, University of California at Berkeley

Texture-Mapping Real Scenes from Photographs
Yizhou Yu, Computer Science Division, University of California at Berkeley
SIGGRAPH 2000 Course on Image-Based Surface Details

Basic Steps
–Acquire Photographs
–Recover Geometry
–Align Photographs with Geometry
–Map Photographs onto Geometry

Camera Pose Estimation
Input
–Known geometry recovered from photographs or laser range scanners
–A set of photographs taken with a camera
Output
–For each photograph, 3 rotation and 3 translation parameters of the camera with respect to the geometry
Requirement
–4 correspondences between each photograph and the geometry

Recover Camera Pose with Known Correspondences
Least-squares solution
–Needs a good initial estimate from human interaction
(Diagram: camera and image plane)
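The least-squares pose solve above is nonlinear, which is why it needs a good starting estimate. As an illustrative stand-in, a linear DLT-style solve for the full 3x4 projection matrix (which needs at least 6 correspondences rather than 4, and is not the talk's exact method) can be sketched as:

```python
import numpy as np

def estimate_projection_dlt(X, x):
    """Estimate a 3x4 projection matrix P from n >= 6 3D-2D point
    correspondences by linear least squares (SVD of the stacked
    constraint matrix). X: (n, 3) world points, x: (n, 2) pixels."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        p = [Xw, Yw, Zw, 1.0]
        # u = (P0 . p) / (P2 . p)  ->  P0 . p - u (P2 . p) = 0, same for v
        A.append(p + [0.0] * 4 + [-u * c for c in p])
        A.append([0.0] * 4 + p + [-v * c for c in p])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)  # null-space vector, up to scale

def project(P, X):
    """Project (n, 3) world points through P with the perspective divide."""
    Xh = np.hstack([X, np.ones((X.shape[0], 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

The recovered P is defined only up to scale; the perspective divide in `project` makes that irrelevant for reprojection.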

Recover Rotation Parameters only from Known Correspondences
–Constraints
–Least-squares solution
(Diagram: camera and image plane)
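When only rotation is unknown, the least-squares problem over corresponding 3D directions has a closed form (orthogonal Procrustes / Kabsch). A minimal sketch of that closed form, not necessarily the talk's exact formulation:

```python
import numpy as np

def best_rotation(a, b):
    """Rotation R minimizing sum ||R a_i - b_i||^2 over proper rotations.
    a, b: (n, 3) arrays of corresponding direction vectors."""
    U, _, Vt = np.linalg.svd(a.T @ b)          # SVD of the correlation matrix
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T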

Obtaining Correspondences
Feature Detection in 3D geometry and 2D images
Human interaction
–Interactively pick corresponding points in photographs and 3D geometry
Automatic Search
–Combinatorial search

Automatic Search for Correspondences
Pose estimation using calibration targets
Combinatorial search for the best match
–4 correspondences per image
(Figure: 3D targets)

Camera Pose Results
–Accuracy: consistently within 2 pixels
(Figure: texture-mapping a single image)

Texture Mapping
–Conventional texture-mapping with texture coordinates
–Projective texture-mapping

Texture Map Synthesis I
Conventional texture-mapping with texture coordinates
–Create a triangular texture patch for each triangle
–The texture patch is a weighted average of the image patches from multiple photographs
–Pixels that are close to image boundaries or viewed from a grazing angle receive smaller weights
(Diagram: photograph, texture map, and 3D triangle)
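A hypothetical per-pixel weight combining the two criteria (viewing angle and distance to the image border). The exact falloff functions below are assumptions for illustration, not taken from the talk:

```python
import numpy as np

def blend_weight(normal, view_dir, px, py, w, h, margin=20.0):
    """Weight for one photograph's contribution at a projected pixel.
    Down-weights grazing views (small |n . v|) and pixels near the
    image border; both falloffs here are illustrative choices."""
    grazing = max(0.0, float(np.dot(normal, view_dir)))   # cos of view angle
    border = min(px, py, w - px, h - py) / margin         # border distance, scaled
    return grazing * min(1.0, max(0.0, border))
```

A photograph seen head-on at the image center gets weight 1; a grazing view or a border pixel gets weight 0, so the weighted average fades its contribution out.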

Texture Map Synthesis II
Allocate space for texture patches within texture maps
–A generalization of memory allocation to 2D
–Quantize edge lengths to a power of 2
–Sort texture patches into decreasing order and use a First-Fit strategy to allocate space
(Diagram: First-Fit allocation)
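A simplified sketch of the allocator described above: quantize each patch side to a power of 2, sort in decreasing order, and place with a first-fit scan over shelves of a fixed-width map. The real allocator handles triangular patches; square patches are assumed here for brevity:

```python
def next_pow2(n):
    """Smallest power of 2 that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def first_fit_pack(sizes, map_width):
    """First-fit packing of square patches into horizontal shelves of a
    fixed-width texture map. Returns (x, y, side) placements in
    decreasing-size order."""
    patches = sorted((next_pow2(s) for s in sizes), reverse=True)
    shelves = []      # each shelf: [y, height, used_width]
    y_cursor = 0
    placements = []
    for side in patches:
        placed = False
        for shelf in shelves:              # first fit: first shelf it fits in
            if shelf[1] >= side and map_width - shelf[2] >= side:
                placements.append((shelf[2], shelf[0], side))
                shelf[2] += side
                placed = True
                break
        if not placed:                     # open a new shelf below the others
            shelves.append([y_cursor, side, side])
            placements.append((0, y_cursor, side))
            y_cursor += side
    return placements
```

Sorting before packing means large patches claim fresh shelves first, and small patches fill the leftover shelf space, which is the point of the First-Fit-decreasing strategy.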

A Texture Map Packed with Triangular Texture Patches

Texture-Mapping and Object Manipulation
(Figures: original configuration and novel configuration)

Texture Map Compression I
–The size of each texture patch is determined by the amount of color variation in its corresponding triangles in the photographs.
–An edge detector (the derivative of the Gaussian) is used as the metric for variation.
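A sketch of such a variation metric: convolve the patch with a sampled derivative-of-Gaussian kernel in each axis and average the gradient magnitude. Kernel normalization and the use of the mean are illustrative assumptions, not the talk's exact settings:

```python
import numpy as np

def dog_kernel(sigma=1.0, radius=3):
    """First derivative of a Gaussian, sampled on [-radius, radius]."""
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return -x * g / (sigma**2 * g.sum())   # antisymmetric, sums to zero

def variation(patch, sigma=1.0):
    """Mean gradient-magnitude response of a grayscale patch; a stand-in
    for the color-variation metric that sets each patch's texture size."""
    k = dog_kernel(sigma)
    gx = np.apply_along_axis(np.convolve, 1, patch, k, mode='same')
    gy = np.apply_along_axis(np.convolve, 0, patch, k, mode='same')
    return float(np.hypot(gx, gy).mean())
```

Flat triangles then score near zero and get small texture patches, while triangles crossing strong edges score high and keep more resolution.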

Texture Map Compression II
Reuse texture patches
–Map the same patch to multiple 3D triangles with similar color variations
–K-means clustering to generate texture patch representatives
–Larger penalty along triangle edges to reduce the Mach band effect
(Diagram: texture map and 3D triangles)
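A plain k-means on flattened patches, as a sketch of how representatives could be chosen; the initialization scheme and the edge-penalty term are not modeled here:

```python
import numpy as np

def kmeans(patches, k, iters=20, seed=0):
    """Lloyd's k-means on flattened texture patches. Returns the k
    representative (flattened) patches and each patch's cluster index."""
    X = patches.reshape(len(patches), -1).astype(float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # random init
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)  # squared dists
        labels = d.argmin(1)
        for j in range(k):                 # move centers to cluster means
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(0)
    return centers, labels
```

Every triangle in a cluster is then texture-mapped with its cluster's representative patch instead of its own, which is where the compression comes from.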

Synthetic Images with Compressed and Uncompressed Texture Maps
(Figures: uncompressed, 20 texture maps; compressed, 5 texture maps)

Projective Texture-Mapping
–Can directly use the original photographs for texture-mapping
–Visibility processing is more complicated
–Projective texture-mapping has been implemented in hardware, so real-time rendering is possible
–View-dependent effects can be added by effective use of the hardware accumulation buffer
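The core of projective texture-mapping is treating the photograph's camera matrix as a texture-coordinate generator. A per-vertex sketch of that mapping (hardware interpolates (s, t, q) across the triangle and divides per fragment):

```python
import numpy as np

def projective_tex_coords(P, vertices):
    """Project (n, 3) vertices through a 3x4 camera matrix P to (s, t)
    texture coordinates; the divide by q is what the hardware performs
    per fragment."""
    V = np.hstack([vertices, np.ones((len(vertices), 1))])
    stq = (P @ V.T).T
    return stq[:, :2] / stq[:, 2:3]
```

In OpenGL terms this is the texture matrix applied by texgen; in practice P is composed with a scale-and-bias so the photograph lands in [0, 1]^2 texture space.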

Motivation for Visibility Processing: Artifacts Caused by Hardware
–Texture gets projected onto occluded and backfacing polygons
(Diagram: camera, image, and geometry)

Visibility Algorithms
Image-space algorithms
–Shadow buffer
–Ray casting
Object-space algorithms
–Weiler-Atherton

A Hybrid Visibility Algorithm
Occlusion testing in image space using Z-buffer hardware
–Render polygons with their identifiers as colors
–Retrieve occluding polygons' ids from the color buffer
Object-space shallow clipping to generate fewer polygons
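The "identifiers as colors" trick can be sketched as a 24-bit encoding: each polygon is flat-shaded with its id packed into RGB, so reading a pixel back from the color buffer yields the id of the frontmost polygon there.

```python
def id_to_rgb(i):
    """Pack a polygon id (< 2^24) into an 8-bit RGB triple for the
    id-color rendering pass."""
    return ((i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)

def rgb_to_id(rgb):
    """Recover the polygon id from a color read back from the buffer."""
    r, g, b = rgb
    return (r << 16) | (g << 8) | b
```

The encoding must be exact, which is why the id pass is rendered without lighting, blending, dithering, or antialiasing.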

Input Photographs and Recovered Geometry from Facade

Visibility Processing Results
(Figures: the tower; the rest of the campus)

Synthetic Renderings
