Acquiring 3D Indoor Environments with Variability and Repetition
Young Min Kim, Stanford University
Niloy J. Mitra, UCL / KAUST
Dong-Ming Yan, KAUST
Leonidas Guibas, Stanford University

Data Acquisition via Microsoft Kinect
Raw data:
– Noisy point clouds
– Unsegmented
– Occlusion issues
Our tool: Microsoft Kinect
– Real-time
– Provides depth and color
– Small and inexpensive
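As a rough sketch (not part of the original talk) of how such a depth frame becomes a point cloud, the snippet below back-projects an HxW depth image through a pinhole model. The intrinsics fx, fy, cx, cy are placeholder Kinect-like values, not calibrated ones, and the function name is ours.

```python
import numpy as np

def depth_to_pointcloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project an HxW depth image (in metres) into an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel grid
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                        # discard invalid zero-depth pixels

# Toy usage with a synthetic 480x640 depth frame
cloud = depth_to_pointcloud(np.random.uniform(0.5, 4.0, size=(480, 640)))
```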

Dealing with Point Cloud Data
– Object-level reconstruction [Chang and Zwicker 2011]
– Scene-level reconstruction [Xiao et al. 2012]

Mapping Indoor Environments
Mapping outdoor environments:
– Roads to drive vehicles on
– Flat surfaces
General indoor environments contain both objects and flat surfaces:
– Diversity of objects of interest
– Objects are often cluttered
– Objects deform and move
Solution: utilize semantic information

Nature of Indoor Environments
Man-made objects can often be well-approximated by simple building blocks:
– Geometric primitives
– Low-DOF joints
Many repeating elements:
– Chairs, desks, tables, etc.
Relations between objects give good recognition cues

Indoor Scene Understanding with Point Cloud Data
– Patch-based approaches [Silberman et al. 2012] [Koppula et al. 2011]
– Object-level understanding [Shao et al. 2012] [Nan et al. 2012]

Comparisons
[1] An Interactive Approach to Semantic Modeling of Indoor Scenes with an RGBD Camera
[2] A Search-Classify Approach for Cluttered Indoor Scene Understanding

              [1]               [2]                 Ours
Prior model   3D database       3D database         Learned
Deformation   Scaling           Part-based scaling  Learned
Matching      Classifier        Classifier          Geometric
Segmentation  User-assisted     Iteration           Iteration
Data          Microsoft Kinect  Mantis Vision       Microsoft Kinect

Contributions
Novel approach based on a learning stage:
– The learning stage builds a model that is specific to the environment
Build an abstract model composed of simple parts and the relationships between parts:
– Uniquely explains possible low-DOF deformations
The recognition stage can quickly acquire large-scale environments:
– About 200 ms per object

Approach
Learning: build a high-level model of the repeating elements
Recognition: use the model and relationships to recognize the objects
[Figure: examples of translational and rotational deformation]

Approach
Learning – build a high-level model of the repeating elements

Output Model: Simple, Lightweight Abstraction
Primitives:
– Observable faces
Connectivity:
– Rigid
– Rotational
– Translational
– Attachment
Relationship:
– Placement information
[Figure: example model with contact, translational, and rotational joints labeled]
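A minimal sketch of what such an abstraction could look like in code, assuming planar face primitives, low-DOF joints, and a placement tag; every field name below is our assumption, not the authors' data structure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Primitive:
    """A planar 'observable face' of an object part."""
    center: Vec3
    normal: Vec3
    extent: Tuple[float, float]          # width, height of the face

@dataclass
class Joint:
    """Low-DOF connection between two primitives."""
    kind: str                            # 'rigid' | 'rotational' | 'translational' | 'attachment'
    parent: int                          # indices into ObjectModel.primitives
    child: int
    axis: Vec3 = (0.0, 0.0, 1.0)         # hinge or sliding axis
    limits: Tuple[float, float] = (0.0, 0.0)   # allowed range of motion

@dataclass
class ObjectModel:
    primitives: List[Primitive] = field(default_factory=list)
    joints: List[Joint] = field(default_factory=list)
    placement: str = "on_ground"         # e.g. rests on the ground plane or on a desk
```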

Joint Matching and Fitting
Individual segmentation:
– Group by similar normals
Initial matching:
– Focus on large parts
– Use size, height, and relative positions
– Keep consistent matches
Joint primitive fitting:
– Add joints if necessary
– Incrementally complete the model
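For the primitive-fitting step, a common least-squares recipe is to fit a plane to each segment with an SVD and measure its in-plane extent. The sketch below is such a generic stand-in, not the paper's fitting procedure.

```python
import numpy as np

def fit_face_primitive(points):
    """Fit a planar face to an Nx3 point segment.
    Returns (centroid, unit normal, (width, height))."""
    centroid = points.mean(axis=0)
    # SVD of the centred points: the last right singular vector is the
    # least-squares plane normal; the first two span the face.
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[2]
    in_plane = (points - centroid) @ vt[:2].T        # 2D coordinates within the face
    extent = in_plane.max(axis=0) - in_plane.min(axis=0)
    return centroid, normal, tuple(extent)
```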

Approach
Learning – build a high-level model of the repeating elements

Approach
Learning – build a high-level model of the repeating elements
Recognition – use the model and relationships to recognize the objects

Hierarchy
Ground plane and desk
Objects:
– Isolated clusters
Parts:
– Group by normals
The segmentation is approximate and will be corrected later
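A crude illustration of this hierarchy, assuming a known up axis: drop points near an estimated support plane, then split what remains into isolated clusters via a radius graph. The thresholds and the SciPy-based neighbour search are our choices, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def split_into_objects(points, up_axis=2, plane_band=0.05, cluster_radius=0.05):
    """Remove points near the estimated support plane, then split the rest
    into isolated clusters (connected components of a radius graph)."""
    floor = np.percentile(points[:, up_axis], 5)          # crude support-plane height
    keep = points[points[:, up_axis] > floor + plane_band]

    # Union-find over neighbour pairs found with a k-d tree.
    parent = np.arange(len(keep))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]                 # path halving
            i = parent[i]
        return i
    for i, j in cKDTree(keep).query_pairs(cluster_radius):
        parent[find(i)] = find(j)

    labels = np.array([find(i) for i in range(len(keep))])
    return [keep[labels == l] for l in np.unique(labels)]
```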

Bottom-Up Approach
Initial assignment of parts to primitives:
– Simple comparison of height, normal, and size
– Robust to deformation
– Low false-negative rate
Refined assignment of objects to models:
– Iteratively solve for position, deformation, and segmentation
– Low false-positive rate
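The initial part-to-primitive assignment could be scored from the three cheap cues above (height, normal, size), keeping every primitive under a permissive threshold so false negatives stay low. The attribute names, weights, and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def part_primitive_cost(part, prim, w_height=1.0, w_normal=1.0, w_size=1.0):
    """Lower is better. `part` and `prim` are dicts with 'height' (m),
    'normal' (unit 3-vector) and 'area' (m^2)."""
    d_height = abs(part["height"] - prim["height"])
    d_normal = 1.0 - abs(float(np.dot(part["normal"], prim["normal"])))
    d_size = abs(part["area"] - prim["area"]) / max(prim["area"], 1e-6)
    return w_height * d_height + w_normal * d_normal + w_size * d_size

def candidate_primitives(part, primitives, thresh=1.0):
    """Keep every primitive below a permissive threshold (few false negatives)."""
    return [p for p in primitives if part_primitive_cost(part, p) < thresh]
```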

Bottom-Up Approach
Initial assignment for parts vs. primitive nodes
Refined assignment for objects vs. models
[Figure: pipeline stages: input points, initial objects, models matched, refined objects]
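The refined assignment alternates correspondence and alignment. As a stand-in for the rigid part of that loop only (the paper additionally solves for deformation and segmentation), here is a bare-bones point-to-point ICP with a Kabsch/SVD pose update; the iteration count and use of SciPy are our choices.

```python
import numpy as np
from scipy.spatial import cKDTree

def refine_rigid_pose(model_pts, scene_pts, iters=20):
    """Bare-bones point-to-point ICP: alternate nearest-neighbour matching
    against the scene and a Kabsch/SVD rigid-pose update."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(scene_pts)
    for _ in range(iters):
        moved = model_pts @ R.T + t
        _, idx = tree.query(moved)                   # closest scene point per model point
        src, dst = model_pts, scene_pts[idx]
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
    return R, t
```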

Results
Data available:
– door/paper_docs/data_learning.zip
– door/paper_docs/data_recognition.zip

Synthetic Scene
Recognition speed: about 200 ms per object

Synthetic Scene (continued)

Synthetic Scene (continued)

[Figure: examples of a different pair and a similar pair]

[Figure: more examples of different and similar pairs]


Office 1
[Figure: recognized objects labeled: trash bin, 4 chairs, 2 monitors, 2 whiteboards]

Office 2

Office 3

Deformations
[Figure: labeled examples: drawer deformations, monitor, laptop, chair; one monitor missed]

Auditorium 1
[Figure: open table]

Auditorium 2
[Figure: open table, open chairs]

Seminar Room 1
[Figure: some chairs missed]

Seminar Room 2
[Figure: some chairs missed]

Limitations
Missing data:
– Occlusion, material, …
Errors in the initial segmentation:
– Cluttered objects are merged into a single segment
– The viewpoint sometimes splits a single object into pieces

Conclusion
We present a system that recognizes repeating objects in cluttered 3D indoor environments.
We use a purely geometric approach based on learned attributes and deformation modes.
The recognized objects provide high-level scene understanding and can be replaced with high-quality CAD models for visualization (as shown in the previous talks!).

Thank You
– Qualcomm Corporation
– Max Planck Center for Visual Computing and Communications
– NSF grants and a KAUST AEA grant
– Marie Curie Career Integration Grant
– Stanford Bio-X travel subsidy