Image-Based Modeling and Rendering (PI: Malik)
Goal: capture photorealistic models of real-world scenes that support interaction:
- Vary viewpoint
- Vary lighting
- Vary scene configuration
This objective was achieved by combining existing techniques, and inventing new ones, in computer vision and graphics.
Image-Based Modeling and Rendering: 1st Generation---- vary viewpoint but not lighting
- Acquire photographs
- Recover geometry (explicit or implicit)
- Texture-map
This line of research led to the much-acclaimed Campanile movie, directed by Paul Debevec, in which the central campus of UC Berkeley was modeled from photographs and used to generate photorealistic flybys.
Image-Based Modeling and Rendering: Photographs are not Reflectance Maps!
2nd Generation---- vary viewpoint and lighting for non-diffuse scenes
- Recover geometry
- Recover reflectance properties
- Render using light-transport simulation
Radiance = Illumination x Reflectance
First-generation IBMR techniques typically use texture mapping to simulate surface appearance. The problem is that texture maps are the product of illumination and reflectance, so they do not permit us to simulate appearance under novel lighting conditions.
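The point above can be made concrete for the simplest (diffuse) case. A minimal sketch, assuming a Lambertian surface and a known per-pixel irradiance at capture time (the function names and example values are illustrative, not from the original system): a photograph bakes in the product of albedo and irradiance, so dividing out the capture-time irradiance recovers a reflectance that can then be re-shaded under novel lighting.

```python
import numpy as np

# For a diffuse surface, observed radiance I = albedo * irradiance.
# A texture map built from I therefore entangles lighting with reflectance.

def recover_albedo(photo, irradiance):
    """Divide out the capture-time irradiance to get reflectance."""
    return photo / np.maximum(irradiance, 1e-8)  # guard against divide-by-zero

def relight(albedo, new_irradiance):
    """Re-render the diffuse surface under novel lighting."""
    return albedo * new_irradiance

photo = np.array([0.4, 0.1, 0.6])       # observed pixel radiances (illustrative)
E_capture = np.full(3, 0.8)             # irradiance at capture (assumed known)
albedo = recover_albedo(photo, E_capture)
novel = relight(albedo, np.full(3, 0.2))  # appearance under dimmer novel light
```

Using the raw `photo` values as a texture under the novel light would be wrong by exactly the baked-in factor `E_capture`; only the recovered `albedo` relights correctly.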
Image-Based Modeling: 2nd Generation---- vary viewpoint and lighting
- Recover geometry & reflectance properties
- Render using light-transport simulation or local shading
(Figures: original lighting & viewpoint; novel lighting & viewpoint)
Inverse Global Illumination (Yu et al., SIGGRAPH 99)
(Diagram: radiance maps, geometry, and light sources in; reflectance properties out)
We start with multiple high-dynamic-range photographs (radiance maps) of a scene, taken from different viewpoints and under a few different illumination conditions. Using the previously described techniques, we recover a geometric model of the environment and the positions of the light sources. The Inverse Global Illumination algorithm takes the radiance maps, the scene geometry, and the light-source positions as input and recovers the reflectance properties of the different surfaces in the scene. The challenge, first addressed in this work, was to solve this inverse problem in the presence of mutual-illumination effects while allowing for non-diffuse reflectances.
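The core inversion can be illustrated in the purely diffuse (radiosity) special case. This is a hedged sketch, not the published algorithm (which also handles specular reflectance and sparse observations): assuming the form factors F are known from the recovered geometry and the radiance B of every patch is measured, the radiosity equation B = E + diag(rho) F B can be solved for the albedos rho per patch, with mutual illumination accounted for exactly. The form factors and emissions below are made-up values for illustration.

```python
import numpy as np

# Made-up form factors between 3 patches (row i: fraction of light
# arriving at patch i from patch j), and an emissive patch 0.
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.2],
              [0.2, 0.2, 0.0]])
rho_true = np.array([0.6, 0.4, 0.8])   # ground-truth diffuse albedos
E = np.array([1.0, 0.0, 0.0])          # patch 0 is the light source

# Forward simulation standing in for "measured" radiance maps:
# solve (I - diag(rho) F) B = E for the equilibrium radiosities B.
B = np.linalg.solve(np.eye(3) - np.diag(rho_true) @ F, E)

# Inverse step: B = E + diag(rho) F B  =>  rho_i = (B_i - E_i) / (F B)_i.
# Dividing by the *total* incident light F @ B is what accounts for
# mutual illumination, rather than attributing radiance to sources alone.
incident = F @ B
rho_est = (B - E) / incident
```

Because every patch's radiance is observed, the recovery here is a closed-form division; the harder setting addressed in the paper is when observations are photographs covering surfaces unevenly and reflectance is not purely diffuse.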
Real vs. Synthetic
The top row shows real photographs (note the lens flare in the rightmost photograph). The bottom row contains synthesized images.
Real vs. Synthetic
The top images are real photographs (note, e.g., the lens flare in the left image). The bottom images were synthetically generated.
Key contributions of Inverse Global Illumination
- All of the data acquisition can be done with a digital camera.
- Both specular and high-resolution diffuse reflectance properties can be recovered from photographs.
- The recovered reflectances allow re-rendering of non-diffuse real scenes under novel illumination as well as from novel viewpoints.
Image-Based Modeling and Rendering: 3rd Generation---- vary spatial configuration in addition to viewpoint and lighting
We want the capability to move objects around in the environment seen in the left image, for example placing a copy of the sofa upside down on the ceiling. While this is trivial in traditional graphics, image-based rendering techniques find such manipulations difficult.
(Figures: novel viewpoint; novel viewpoint & configuration)
Framework
Input:
- Multiple range scans of a scene
- Multiple photographs of the same scene
Output:
- Geometric meshes of each object in the scene
- Registered texture maps for the objects
Segmentation Results [Yu, Ferencz and Malik, TVCG '01]
The different pseudo-colors show different depth surfaces automatically segmented by the algorithm. The left image shows objects in a room; the right image shows an exterior façade.
Models of Individual Objects These were recovered by our system following automatic segmentation.
Texture-Mapping and Object Manipulation
Given a segmented scene, the objects (such as the sofa) can be replicated and manipulated independently. This provides the capability of authoring virtual-reality training scenarios.
Transitions
- Façade has inspired several image-based modeling packages, including Canoma from MetaCreations.
- Our technique for registering range and radiance images is being used by Cyra, a leading time-of-flight laser scanner company.
Continuing Challenges
- Automatically finding correspondences between widely separated camera views
- Models of reflectance and texture for natural materials and objects
The first problem is fully automating the geometric recovery process; the key subproblem is finding correspondences automatically and with high accuracy. Teller's project has made significant progress by making clever use of a pose camera; this line of work needs to be pushed further. The inverse global illumination technique for recovering reflectance and texture relies on suitable models; good models for outdoor scenery (terrain, vegetation, etc.) remain a challenge.