Constructing immersive virtual space for HAI with photos
Shingo Mori, Yoshimasa Ohmoto, Toyoaki Nishida
Graduate School of Informatics, Kyoto University
GrC2011

Presentation transcript:

Constructing immersive virtual space for HAI with photos
Shingo Mori, Yoshimasa Ohmoto, Toyoaki Nishida
Graduate School of Informatics, Kyoto University
GrC2011, 2011/11/09

Abstract
We automatically construct immersive virtual spaces for human-agent interaction (HAI)
– Scenes are drawn from photographs of the real environment
– Depth maps are reconstructed to express occlusion
– Rough 3D models are added for the agents
– Processing time is about 4.7 days to reconstruct a 20 m × 20 m virtual space

Introduction
We want to observe human-human interaction (HHI) through HAI in a virtual space
For example, in a virtual sightseeing task:
– we can select a faraway place, such as a foreign country
– we can easily prepare an environment in which to observe interaction
Our goal: create a system that constructs an environment for such a task

Introduction
To carry out the sightseeing task and observe interaction, the environment should look like the real world
– virtual spaces should be immersive
– scenes should be recreated from real-world photos
– the spatial relationship between agents and objects should be correct
– users should be able to walk around with some degree of freedom
How can we construct such a virtual space?

Related Work
Model Based Rendering (MBR)
– can reconstruct 3D models
– makes arbitrary, consistent views easy to generate
– weak on trees and texture-less surfaces
[1-3] are good methods, but
– [1] cannot handle outdoor scenes because it relies on the Manhattan-world assumption
– [2,3] need expensive equipment or a lot of time and effort
[1] Furukawa et al. 2010, Reconstructing building interiors from images
[2] Pollefeys et al. 2008, Detailed real-time urban 3D reconstruction from video
[3] Ikeuchi et al. 2004, Bayon digital archival project

Related Work
Image Based Rendering (IBR)
– synthesizes new viewpoint images by interpolation
– renders complex structures such as natural objects clearly
– weak at handling occlusion
[4,5] have good image quality, but
– they do not consider agents
– the movable space is restricted
[4] Google Street View
[5] Ibuki 2009, Reduction of Unnatural Feeling in Free-viewpoint Rendering Using View-Dependent Deformable 3-D Mesh Model (in Japanese)

Our Method
To build the immersive environment, we use IBR
– because high image quality is needed to show the scene
– we use panorama images and an omnidirectional display to present the environment

Our Method
To collect photo images
– divide the space into a 1-2 m grid
– shoot about 18 photos at each grid point (see the sketch below)
We use interpolation when moving from one shooting point to another
(Figure: grid of shooting points and shooting directions at 1-2 m spacing, with obstacles)
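A minimal Python sketch of how such a shooting plan could be enumerated; the 1-2 m grid and the ~18 photos per point come from the slide, while the function name, the rectangular-space assumption, and the coordinate convention are illustrative assumptions:

```python
import math

def shooting_plan(width_m, depth_m, grid_m=1.5, views_per_point=18):
    """Enumerate (x, y, heading) shooting poses on a regular grid.

    Assumes the space is an axis-aligned width_m x depth_m rectangle and
    that the headings at each point are spread evenly over 360 degrees.
    """
    poses = []
    nx = int(width_m // grid_m) + 1
    ny = int(depth_m // grid_m) + 1
    for ix in range(nx):
        for iy in range(ny):
            for k in range(views_per_point):
                heading = 2.0 * math.pi * k / views_per_point
                poses.append((ix * grid_m, iy * grid_m, heading))
    return poses

# e.g. a 20 m x 20 m space (as in the abstract) at 1.5 m spacing:
print(len(shooting_plan(20, 20)))  # 14 x 14 grid points x 18 views = 3528 photos
```

In practice grid points blocked by obstacles would simply be skipped, as the slide's figure suggests.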

Our Method
3D geometry is needed for the agents
– use Structure from Motion and multi-view stereo, in a manner similar to [1,5]
– create depth maps to handle occlusion between objects and agents
This information is also used for better IBR
– camera positions and rotations
– 3D positions of a point cloud

System Pipeline (Input → Process → Output)
– Photos → Structure from Motion → camera parameters
– Photos + camera parameters → Multi-view Stereo (CMVS) → patches, rough 3D model
– Photos → Segmentation → segmented images
– Segmented images + patches → Creating Depth Map → depth maps
– Images + depth maps → Creating Panorama → panorama images, panorama depth maps
– Panoramas + panorama depth maps → Interpolation → interpolated images
– Output: show an immersive virtual space
(Some stages use previous work; the remaining stages are tackled in this research)

Structure from Motion (SfM)
Estimate camera parameters (projection matrices) from multiple photos
– we use Bundler [6]
(Figure: photos → point cloud and camera positions)
[6] Snavely et al. 2006, Photo tourism: exploring photo collections in 3D
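The slide relies on Bundler's standard output; a minimal sketch of reading the cameras and the sparse point cloud back from a Bundler v0.3 bundle file (the default path is an assumption, and only the fields used later in the pipeline are kept):

```python
import numpy as np

def load_bundle(path="bundle/bundle.out"):
    """Parse cameras and sparse points from a Bundler v0.3 bundle file."""
    with open(path) as f:
        lines = [ln for ln in f if not ln.startswith("#")]   # drop the header comment
    it = iter(float(tok) for ln in lines for tok in ln.split())
    n_cams, n_pts = int(next(it)), int(next(it))

    cameras = []
    for _ in range(n_cams):
        f_len, k1, k2 = next(it), next(it), next(it)          # focal length, radial distortion
        R = np.array([next(it) for _ in range(9)]).reshape(3, 3)
        t = np.array([next(it) for _ in range(3)])
        cameras.append({"f": f_len, "k": (k1, k2), "R": R, "t": t})

    points = []
    for _ in range(n_pts):
        xyz = np.array([next(it) for _ in range(3)])
        rgb = tuple(int(next(it)) for _ in range(3))
        n_views = int(next(it))
        for _ in range(4 * n_views):                          # skip the view list
            next(it)
        points.append((xyz, rgb))
    return cameras, points
```

In Bundler's convention a world point X maps to camera coordinates as R·X + t with the camera looking down the negative z axis; this matters when building projection matrices for the later steps.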

Multi-view Stereo
Reconstruct 3D geometry
– we use CMVS [7] and Poisson Surface Reconstruction [8]
– obtain a point cloud (patches) and a rough 3D model
(Figure: photos and camera matrices → patches and rough 3D model)
[7] Furukawa et al. 2010, Towards internet-scale multi-view stereo
[8] Kazhdan et al. 2006, Poisson surface reconstruction
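A minimal sketch of reproducing the Poisson step with Open3D on a PMVS/CMVS-style point cloud; the file name follows PMVS's usual output naming and the Open3D call is a stand-in for the original implementation of [8]:

```python
import open3d as o3d

# PMVS/CMVS typically writes its dense patches to models/option-0000.ply
pcd = o3d.io.read_point_cloud("models/option-0000.ply")
if not pcd.has_normals():
    pcd.estimate_normals()          # Poisson reconstruction needs oriented normals

# Fit a watertight "rough 3D model" to the patches
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=8)
o3d.io.write_triangle_mesh("rough_model.ply", mesh)
```

A coarse depth setting is enough here, since the mesh only serves as the rough geometry for the agents rather than as the rendered scene.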

Create Depth Map
Deal with holes and outliers in the point cloud
Using the assumption that the real world is made up of planar surfaces
– reconstruct surfaces from the projected patches
– vertical surfaces can be reconstructed almost completely
(Figure: raw image, segmented image, projected patches, depth map)
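A simplified sketch of that piecewise-planar densification: project the MVS patches into the view with its projection matrix, then fit one plane per image segment. The real system reconstructs surfaces from the patches themselves; fitting depth as a linear function of pixel coordinates is an assumption made to keep the example short:

```python
import numpy as np

def depth_map_from_patches(points_3d, P, segments):
    """Sparse-to-dense depth under a piecewise-planar assumption.

    points_3d: Nx3 patch centers from multi-view stereo (world coords)
    P:         3x4 projection matrix of the view (positive depth in front)
    segments:  HxW integer label image, one label per planar region
    """
    height, width = segments.shape
    depth = np.zeros((height, width))

    X = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    x = (P @ X.T).T                              # project patches into the image
    z = x[:, 2]
    u, v = x[:, 0] / z, x[:, 1] / z
    ok = (z > 0) & (u >= 0) & (u < width) & (v >= 0) & (v < height)
    u, v, z = u[ok].astype(int), v[ok].astype(int), z[ok]

    for label in np.unique(segments):
        mask = segments == label
        inside = mask[v, u]                      # patches falling in this segment
        if inside.sum() < 3:
            continue                             # not enough samples for a plane
        A = np.c_[u[inside], v[inside], np.ones(int(inside.sum()))]
        a, b, c = np.linalg.lstsq(A, z[inside], rcond=None)[0]
        rows, cols = np.nonzero(mask)
        depth[rows, cols] = a * cols + b * rows + c
    return depth
```

Each segment thus gets a smooth depth estimate even where the point cloud has holes, which is the role of this stage in the pipeline.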

Create Panorama Image
To show a scene on an omnidirectional display, we create panorama images
– we use Microsoft ICE [9]
– canonicalize the direction of each panorama image from the camera rotation
(Figure: panorama image and panorama depth map)
[9] Microsoft Corporation, Microsoft Image Composite Editor, us/um/redmond/groups/ivm/ice.html
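A minimal sketch of that canonicalization for an equirectangular panorama: read the camera's heading from the SfM rotation and roll the image so every panorama faces the same world direction. The reference direction (world +X) and the equirectangular layout are assumptions, not taken from the paper:

```python
import numpy as np

def canonicalize_panorama(pano, R):
    """Horizontally shift an equirectangular panorama to a common heading.

    pano: H x W x 3 equirectangular image
    R:    3x3 world-to-camera rotation from SfM for this shooting point
    """
    # the camera's forward axis expressed in world coordinates
    forward_world = R.T @ np.array([0.0, 0.0, 1.0])
    yaw = np.arctan2(forward_world[1], forward_world[0])  # heading w.r.t. world +X
    width = pano.shape[1]
    shift = int(round(yaw / (2 * np.pi) * width))          # yaw as a pixel offset
    return np.roll(pano, -shift, axis=1)                   # align to the shared heading
```

After this step, turning by a given angle corresponds to the same world direction in every panorama, which simplifies finding correspondences in the interpolation stage.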

Interpolation
To move freely, we create interpolated images between nearby panorama images
– project patches into the panoramas to use as feature points
– find corresponding points between two raw panorama images about 1-2 m apart
– interpolate by morphing (an intermediate viewpoint between the raw images)
– the direction and distance of object motion are reproduced correctly
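A simplified sketch of that morph: given matched points in two neighboring panoramas (e.g. the projected patches), move the points to their interpolated positions and warp each panorama toward them with a piecewise-affine transform, then cross-dissolve. This is a stand-in for the view-dependent deformable-mesh rendering of [5], not a reimplementation of it:

```python
import numpy as np
from skimage.transform import PiecewiseAffineTransform, warp

def morph_panoramas(img_a, img_b, pts_a, pts_b, t):
    """Intermediate view between two panoramas, t in [0, 1].

    pts_a, pts_b: matched feature points as Nx2 arrays, (x, y) per row
    t = 0 reproduces img_a's viewpoint, t = 1 reproduces img_b's.
    """
    pts_t = (1 - t) * pts_a + t * pts_b          # where the features should appear

    tf_a = PiecewiseAffineTransform()
    tf_a.estimate(pts_t, pts_a)                  # output coords -> coords in img_a
    tf_b = PiecewiseAffineTransform()
    tf_b.estimate(pts_t, pts_b)                  # output coords -> coords in img_b

    warped_a = warp(img_a, tf_a)                 # both panoramas warped so their
    warped_b = warp(img_b, tf_b)                 # features land on pts_t
    return (1 - t) * warped_a + t * warped_b     # cross-dissolve the two warps
```

Calling this with t = 0.5 gives the "medium point between raw images" mentioned on the slide; with the panorama depth maps available, occluded regions could additionally be resolved by preferring the nearer surface.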

Demo

Processing Time
We experimented with 3 spaces
– most of the processing time is spent on SfM
– this could be drastically reduced by using [10]
– shooting each space takes about one hour
[10] Agarwal et al. 2009, Building Rome in a day

Conclusion
– we created a system that automatically constructs virtual spaces for HAI
– we unified various existing methods to build the system
Future work
– expand the virtual spaces
– investigate how natural and useful the spaces are for HAI
– observe HAI and feed the findings back to the real world
