Constructing immersive virtual space for HAI with photos
Shingo Mori, Yoshimasa Ohmoto, Toyoaki Nishida
Graduate School of Informatics, Kyoto University
GrC2011, 2011/11/09

Abstract
We automatically construct immersive virtual spaces for human-agent interaction (HAI)
– scenes are rendered from outdoor photographs
– depth maps are reconstructed to express occlusion
– rough 3D models are added for the agent
– processing time is about 4.7 days to reconstruct a 20 m × 20 m virtual space

Introduction
We want to observe human-human interaction (HHI) through HAI in a virtual space
For example, a sightseeing task:
– we can select faraway places such as foreign countries
– we can easily prepare the environment to observe
Our goal: create a system that constructs environments for such a task

Introduction
To carry out the sightseeing task and observe the interaction, the environment should look like the real world
– virtual spaces should be immersive
– outdoor scenes created from real-world photos are needed
– the spatial relationship between the agent and objects should be correct
– users should be able to walk around freely to some extent
How do we construct such a virtual space?

Related Work
Model-Based Rendering (MBR)
– reconstructs 3D models
– weak at trees and texture-less surfaces
[1-3] are effective methods, but:
– [1] cannot be used outdoors because of its axis-aligned-surface constraint
– [2,3] require expensive equipment or a lot of time and effort
[1] Furukawa et al. 2010, Reconstructing building interiors from images
[2] Pollefeys et al. 2008, Detailed real-time urban 3D reconstruction from video
[3] Ikeuchi et al. 2004, Bayon digital archival project

Related Work
Image-Based Rendering (IBR)
– clearly renders complex structures such as natural objects
– weak at occlusion
[4,5] have good image quality, but:
– they do not consider agents
– the movable space is restricted
[4] Google Street View
[5] Ibuki 2009, Reduction of Unnatural Feeling in Free-viewpoint Rendering Using View-Dependent Deformable 3-D Mesh Model (in Japanese)

Our Method
To build an immersive environment, we use IBR
– because MBR produces holes and too low a resolution for this task
– panorama images and an omnidirectional display are used to show the environment

Our Method
To collect photo images:
– divide the space into a 1-2 m grid
– shoot about 18 photos at each grid point
We use interpolation when moving from one shooting point to another (see the sketch below)
(Figure: grid of shooting points and shooting directions, avoiding obstacles, spaced 1-2 meters apart)
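As a minimal sketch of the capture plan above, the snippet below enumerates shooting points on a regular grid with evenly spaced shooting directions. The 20 m extent, 2 m grid step, and 18 directions are assumptions matched to the numbers on the slides ("1-2 m" grid, "about 18" photos); obstacle handling is omitted.

```python
import numpy as np

def capture_plan(extent_m=20.0, step_m=2.0, views_per_point=18):
    """Enumerate shooting points on a regular grid and the yaw angles
    (degrees) needed to cover 360 degrees at each point."""
    xs = np.arange(0.0, extent_m + 1e-9, step_m)
    ys = np.arange(0.0, extent_m + 1e-9, step_m)
    points = [(x, y) for x in xs for y in ys]
    yaws = np.linspace(0.0, 360.0, views_per_point, endpoint=False)
    return points, yaws

points, yaws = capture_plan()
print(len(points), "shooting points,", len(yaws), "directions each")
```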

Our Method
3D geometry is needed for the agent
– we use Structure from Motion and a stereo method in a way similar to [1,5]
– we create depth maps to handle occlusion between objects and agents
This information is also used for better IBR:
– camera position & rotation
– 3D positions of a point cloud
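The occlusion handling mentioned above amounts to a per-pixel depth test between the agent and the scene's depth map. The routine below is a hypothetical sketch of that idea; names such as `scene_depth` and `agent_depth` are illustrative and not taken from the paper.

```python
import numpy as np

def composite_agent(scene_rgb, scene_depth, agent_rgb, agent_depth, agent_mask):
    """Show the agent only where it is closer to the viewer than the scene.
    scene_rgb/agent_rgb: HxWx3 arrays, depths: HxW in meters, mask: HxW bool."""
    visible = agent_mask & (agent_depth < scene_depth)
    out = scene_rgb.copy()
    out[visible] = agent_rgb[visible]
    return out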

System Pipeline
(Figure: system for constructing the virtual space. Input: photos. Processes: Structure from Motion → camera parameters; Multi-view Stereo (CMVS) → patches and rough 3D model; Segmentation → segmented images; Creating Depth Map → depth maps; Creating Panorama → panorama images and panorama depth maps; Interpolation → interpolated images. Output: an immersive virtual space. SfM, multi-view stereo, and segmentation use previous work; the remaining steps are tackled in this research.)

Structure from Motion (SfM)
Estimate camera parameters (rotation and translation) from multiple photos
– we use Bundler [6]
– it is robust and accurate
(Figure: input photos → point cloud and camera positions)
[6] Snavely et al. 2006, Photo tourism: exploring photo collections in 3D
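Bundler itself is an external C++ tool; purely as an illustration of what SfM estimates (relative camera rotation and translation from matched feature points), here is a two-view sketch using OpenCV. The intrinsic matrix `K` and the matched point arrays `pts1`/`pts2` are assumed inputs, and the full pipeline would add incremental registration and bundle adjustment.

```python
import cv2
import numpy as np

def relative_pose(pts1, pts2, K):
    """Estimate relative camera rotation R and translation t (up to scale)
    from corresponding image points (Nx2 float32) in two views."""
    E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                      method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t
```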

Multi-view Stereo
Reconstruct 3D geometry
– we use CMVS [7] and Poisson Surface Reconstruction [8]
– we obtain a point cloud (patches) and a rough 3D model
(Figure: photos and camera parameters → patches and rough 3D model)
[7] Furukawa et al. 2010, Towards internet-scale multi-view stereo
[8] Kazhdan et al. 2006, Poisson surface reconstruction
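The authors run CMVS and Poisson Surface Reconstruction as external tools. Purely to illustrate the second step, the sketch below meshes an oriented point cloud with the Poisson reconstruction implemented in Open3D; this is a stand-in assumption, not the toolchain used in the paper, and the file names are illustrative.

```python
import open3d as o3d

# Load a point cloud (e.g. the patches exported by multi-view stereo).
pcd = o3d.io.read_point_cloud("patches.ply")   # illustrative path
pcd.estimate_normals()                         # Poisson needs oriented normals

# Poisson surface reconstruction; `depth` controls the octree resolution.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)
o3d.io.write_triangle_mesh("rough_model.ply", mesh)
```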

Create Depth Map
Deal with holes and outliers
We assume that the real world is made up of planar surfaces
– two points are assumed to lie on the same planar surface if they are segmented into the same area
– the surface is reconstructed from the projected patches
(Figure: raw image → segmented image → projected patches → depth map)
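A minimal sketch of that per-segment planar fit, assuming `labels` is the HxW segmentation, `patch_px` holds the integer pixel coordinates of the projected patches, and `patch_depth` their depths (all names are illustrative, not from the paper):

```python
import numpy as np

def planar_depth_map(labels, patch_px, patch_depth):
    """Fit depth = a*u + b*v + c per segment (least squares) and fill its pixels."""
    h, w = labels.shape
    depth = np.zeros((h, w), dtype=np.float32)
    for seg in np.unique(labels):
        in_seg = labels[patch_px[:, 1], patch_px[:, 0]] == seg
        if in_seg.sum() < 3:              # need at least 3 points for a plane
            continue
        u, v = patch_px[in_seg, 0], patch_px[in_seg, 1]
        A = np.column_stack([u, v, np.ones_like(u)]).astype(np.float64)
        coef, *_ = np.linalg.lstsq(A, patch_depth[in_seg], rcond=None)
        vv, uu = np.nonzero(labels == seg)
        depth[vv, uu] = coef[0] * uu + coef[1] * vv + coef[2]
    return depth
```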

Create Panorama Image
To show a scene on an omnidirectional display, we create panorama images
– we use Microsoft ICE [9]
– the direction of each panorama image is canonicalized from the camera rotation
(Figure: panorama image and panorama depth map)
[9] Microsoft Corporation, Microsoft Image Composite Editor, research.microsoft.com/en-us/um/redmond/groups/ivm/ice.html
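Microsoft ICE is a GUI tool; as a stand-in illustration of the stitching step, OpenCV's high-level Stitcher can merge the roughly 18 photos taken at one grid point into a panorama. This is an assumption of equivalent functionality, not the authors' toolchain, and the paths are illustrative.

```python
import cv2
import glob

# Photos shot at one grid point (illustrative directory layout).
images = [cv2.imread(p) for p in sorted(glob.glob("gridpoint_00/*.jpg"))]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(images)
if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama_00.jpg", panorama)
```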

Interpolation
To allow free movement, we create interpolated images between nearby panorama images (two raw panoramas about 1-2 m apart)
– project patches to use as feature points
– find corresponding points between the two raw panorama images
– interpolate by morphing (the midpoint between the raw images)
– correct the movement direction and distance with respect to objects
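A crude sketch of the morphing idea: given corresponding points between two panoramas (obtained here from the projected patches), warp each image toward the averaged correspondence positions and blend 50/50. The paper's view-dependent morphing is more elaborate; this version uses simple homographies for illustration only, and all names are assumptions.

```python
import cv2
import numpy as np

def midpoint_view(img_a, img_b, pts_a, pts_b):
    """Approximate the view halfway between two panoramas.
    pts_a, pts_b: Nx2 float32 corresponding points (N >= 4)."""
    pts_mid = 0.5 * (pts_a + pts_b)
    h, w = img_a.shape[:2]
    H_a, _ = cv2.findHomography(pts_a, pts_mid, cv2.RANSAC)
    H_b, _ = cv2.findHomography(pts_b, pts_mid, cv2.RANSAC)
    warp_a = cv2.warpPerspective(img_a, H_a, (w, h))
    warp_b = cv2.warpPerspective(img_b, H_b, (w, h))
    return cv2.addWeighted(warp_a, 0.5, warp_b, 0.5, 0)
```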

Demo

Processing time
We experimented with 3 spaces
Most of the processing time is spent on SfM
– we could drastically improve this by using [10]
Each shooting session takes about one hour
[10] Agarwal et al. 2009, Building Rome in a day

Conclusion
– we created a system that automatically constructs virtual spaces for HAI
– we unified various methods into one system
Future work
– expand the virtual spaces
– investigate how natural and useful the spaces are for HAI