Image Based Modeling and Rendering (PI: Malik)

Presentation transcript:

Image Based Modeling and Rendering (PI: Malik)
Goal: capture photorealistic models of real-world scenes that can support interaction:
- Vary viewpoint
- Vary lighting
- Vary scene configuration
This objective was achieved by combining existing techniques, and inventing new ones, in computer vision and graphics.

Image-based Modeling and Rendering
1st generation: vary viewpoint but not lighting.
- Acquire photographs
- Recover geometry (explicit or implicit)
- Texture map
This line of research led to the much-acclaimed Campanile movie, directed by Paul Debevec, in which the central campus of UC Berkeley was modeled from photographs and used to generate photorealistic flybys.
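The texture-mapping step above amounts to projecting each photograph onto the recovered geometry. A minimal sketch of that projection, using a standard pinhole camera model with made-up intrinsics and pose (these numbers are illustrative placeholders, not values from the Campanile project):

```python
import numpy as np

def project(point_3d, K, R, t):
    """Project a world-space point into pixel coordinates."""
    p_cam = R @ point_3d + t       # world frame -> camera frame
    p_img = K @ p_cam              # camera frame -> image plane
    return p_img[:2] / p_img[2]    # perspective divide

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])    # hypothetical intrinsics
R = np.eye(3)                      # camera aligned with world axes
t = np.array([0.0, 0.0, 5.0])      # scene 5 units in front of camera

# A surface vertex at the world origin lands at the pixel given by the
# principal point, since it lies on the optical axis.
uv = project(np.array([0.0, 0.0, 0.0]), K, R, t)
```

Texture mapping then samples the photograph at `uv` for each visible surface point.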

Image-based Modeling and Rendering
Photographs are not reflectance maps!
2nd generation: vary viewpoint and lighting for non-diffuse scenes.
- Recover geometry
- Recover reflectance properties
- Render using light transport simulation
First-generation IBMR techniques typically use texture mapping to simulate surface appearance. The problem is that a texture map records the product of illumination and reflectance (radiance = illumination x reflectance), so it does not permit us to simulate appearance under novel lighting conditions.

Image-based Modeling
2nd generation: vary viewpoint and lighting.
- Recover geometry and reflectance properties
- Render using light transport simulation or local shading
(Slide images: original lighting & viewpoint vs. novel lighting & viewpoint.)

Inverse Global Illumination (Yu et al., SIGGRAPH 99)
Inputs: radiance maps, geometry, light sources. Output: reflectance properties.
We start with multiple high dynamic range photographs (radiance maps) of a scene, taken from different viewpoints and under a few different illumination conditions. Using the previously described techniques, we recover a geometric model of the environment and the positions of the light sources. The Inverse Global Illumination algorithm takes the radiance maps, the scene geometry, and the light source positions as input, and recovers the reflectance properties of the different surfaces in the scene. The challenge, first addressed in this work, was to solve this inverse problem in the presence of mutual illumination effects while allowing for non-diffuse reflectances.
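The core inverse idea can be sketched in the diffuse-only (radiosity) special case. The forward model is B = E + rho * (F @ B), with B the observed radiosities, E the emission, F the form factors, and rho the unknown albedos; since the incident light F @ B already includes all interreflections, each albedo follows as reflected light divided by incident light. This is a toy illustration with synthetic values, not the full non-diffuse algorithm of the paper:

```python
import numpy as np

rho_true = np.array([0.6, 0.3, 0.8])   # unknown diffuse albedos
E = np.array([1.0, 0.0, 0.0])          # patch 0 is the light source
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])        # toy form factors

# Forward-simulate the "photographed" radiosities by solving
# (I - diag(rho) F) B = E.
B = np.linalg.solve(np.eye(3) - rho_true[:, None] * F, E)

# Inverse step: incident light at each patch, including mutual
# illumination, is F @ B; albedo = (reflected) / (incident).
incident = F @ B
rho_recovered = (B - E) / incident
```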

Real vs. Synthetic
The top row shows real photographs (note the lens flare in the rightmost photograph); the bottom row contains synthesized images.

Real vs. Synthetic
The top images are real photographs (note, e.g., the lens flare in the left image); the bottom images were synthetically generated.

Key contributions of Inverse Global Illumination
- All of the data acquisition can be done with a digital camera.
- Both specular and high-resolution diffuse reflectance properties can be recovered from photographs.
- The recovered reflectances allow non-diffuse real scenes to be re-rendered under novel illumination as well as from novel viewpoints.
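The camera-based acquisition relies on radiance maps assembled from bracketed exposures. A minimal sketch of the merging step, assuming a linear camera response for simplicity (the full Debevec-Malik HDR method also recovers the response curve); the pixel values and weighting thresholds here are synthetic placeholders:

```python
import numpy as np

def merge_exposures(images, exposure_times, saturation=0.95):
    """Weighted average of per-exposure radiance estimates (linear response)."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        # Trust mid-range pixels; down-weight near-black and near-saturated.
        w = np.where((img > 0.05) & (img < saturation), 1.0, 1e-6)
        num += w * img / t        # radiance estimate from this exposure
        den += w
    return num / den

true_radiance = np.array([0.02, 0.5, 8.0])   # dark, mid, very bright pixel
times = [1.0, 0.25, 0.0625]
# Simulate clipped sensor readings Z = clip(L * t, 0, 1).
shots = [np.clip(true_radiance * t, 0.0, 1.0) for t in times]
hdr = merge_exposures(shots, times)
```

No single exposure captures all three pixels unclipped, but the merged map recovers the full dynamic range.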

Image-based Modeling and Rendering
3rd generation: vary spatial configuration in addition to viewpoint and lighting.
We want the capability to move objects around in the environment seen in the left image, such as placing a copy of the sofa upside down on the ceiling. While this is trivial in traditional graphics, image-based rendering techniques find such manipulations difficult.
(Slide images: novel viewpoint vs. novel viewpoint & configuration.)

Framework
Input:
- Multiple range scans of a scene
- Multiple photographs of the same scene
Output:
- Geometric meshes of each object in the scene
- Registered texture maps for the objects

Segmentation Results [Yu, Ferencz and Malik, TVCG '01]
The different pseudo-colors show the different depth surfaces automatically segmented by the algorithm. The left image shows objects in a room; the right image shows an exterior façade.
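The basic cue behind this kind of range segmentation is that object boundaries produce depth discontinuities. A toy 1-D sketch: label pixels along a scanline, starting a new segment wherever neighboring depths jump by more than a threshold. The real algorithm works on full range images and also uses surface normals; the data and threshold below are illustrative:

```python
def segment_scanline(depths, jump=0.2):
    """Assign a segment label to each depth sample along one scanline."""
    labels = [0]
    for prev, cur in zip(depths, depths[1:]):
        # A large depth jump marks a boundary: start a new segment.
        labels.append(labels[-1] + (1 if abs(cur - prev) > jump else 0))
    return labels

row = [1.00, 1.02, 1.05, 2.50, 2.48, 2.51, 0.90]
segments = segment_scanline(row)   # three surfaces along this scanline
```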

Models of Individual Objects These were recovered by our system following automatic segmentation.

Texture-Mapping and Object Manipulation
Given a segmented scene, the objects (such as the sofa) can be replicated and manipulated independently. This provides the capability of authoring virtual-reality training scenarios.
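Once each object is its own mesh, scene editing reduces to applying a rigid transform to that object's vertices. A sketch of the sofa-on-the-ceiling example, with made-up vertex data (y is up) and a hypothetical ceiling height:

```python
import numpy as np

sofa = np.array([[0.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.5, 0.0]])   # a few vertices of a segmented object

# A 180-degree rotation about the x-axis flips the object upside down.
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
ceiling_height = 3.0                 # hypothetical room height
t = np.array([0.0, ceiling_height, 0.0])

# Apply the rigid transform v' = R v + t to every vertex.
flipped = sofa @ R.T + t
```

The object's registered texture map travels with the mesh unchanged, which is what makes such edits straightforward once segmentation is done.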

Transitions
Façade has inspired several image-based modeling packages, including Canoma from MetaCreations. Our technique for registering range and radiance images is being used by Cyra, a leading maker of time-of-flight laser scanners.

Continuing Challenges
- Automatically finding correspondences between widely separated camera views
- Models of reflectance and texture for natural materials and objects
The first problem is that of fully automating the geometric recovery process; the key subproblem is finding correspondences automatically and with high accuracy. Teller's project has made significant progress by making clever use of a pose camera, and this line of work needs to be pushed further. The inverse global illumination technique for recovering reflectance and texture relies on suitable models; good models for outdoor scenery (terrain, vegetation, etc.) remain a challenge.