Automated Reconstruction of Industrial Sites
Frank van den Heuvel, Tahir Rabbani

Overview
- Introduction
- Automation: how does it work?
- Sample project: off-shore platform
- Accuracy
- Future
- Conclusions

The group: Photogrammetry & Remote Sensing
"Development of efficient techniques for the acquisition of 3D information by computer-assisted analysis of image and range data"

The project: Services and Training through Augmented Reality (STAR)
- EU Fifth Framework, IST programme
- "Develop new Augmented Reality techniques for training, on-line documentation, maintenance and planning purposes in industrial applications"
- AR example: virtual human in video

The project: Services and Training through Augmented Reality (STAR)
- Partners: Siemens, KULeuven, EPFL, UNIGE, Realviz
- TUDelft: "Automated 3D reconstruction of industrial installations from laser and image data"

Automated reconstruction procedure – Overview
1. Segmentation: grouping points of surface patches
2. Object detection: finding planes and cylinders
3. Fitting: final parameter estimation

Segmentation – step 1
Estimation of surface normals using K nearest neighbours (here K = 10 points)

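The slides give no code, but the normal-estimation step can be sketched as a plane fit by PCA over each point's K nearest neighbours: the normal is the direction of least variance in the neighbourhood. This is a minimal illustration, not the authors' implementation; the function name and the brute-force neighbour search are my own choices (a k-d tree would replace the brute-force search on real scans).

```python
import numpy as np

def estimate_normals(points, k=10):
    """Estimate a surface normal per point from its k nearest neighbours.

    The normal is the eigenvector of the neighbourhood covariance matrix
    with the smallest eigenvalue (a plane fit by PCA)."""
    n = len(points)
    normals = np.empty((n, 3))
    for i, p in enumerate(points):
        # Brute-force k nearest neighbours (includes the point itself).
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov(nbrs.T)
        eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
        normals[i] = eigvecs[:, 0]              # smallest-eigenvalue direction
    return normals
```

Note that the sign of each normal is arbitrary; comparisons downstream (e.g. in region growing) should use the absolute value of the dot product.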
Segmentation – step 2
Region growing using:
- Connectivity (K nearest neighbours)
- Surface smoothness (angle between normals)

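A minimal sketch of the region-growing step, assuming exactly the two criteria on the slide: connectivity from a k-NN graph, and a threshold on the angle between neighbouring normals. The function name and parameter values are illustrative, not from the source.

```python
import numpy as np
from collections import deque

def region_grow(points, normals, k=10, angle_thresh_deg=15.0):
    """Group points into smooth segments.

    A point joins a region through a k-NN neighbour whose normal differs
    by less than angle_thresh_deg (sign of the normal is ignored)."""
    n = len(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    # Brute-force k-NN graph; a k-d tree would be used for large clouds.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nbrs = np.argsort(dists, axis=1)[:, 1:k + 1]  # skip the point itself
    labels = -np.ones(n, dtype=int)
    region = 0
    for seed in range(n):
        if labels[seed] != -1:
            continue
        labels[seed] = region
        queue = deque([seed])
        while queue:
            i = queue.popleft()
            for j in nbrs[i]:
                if labels[j] == -1 and abs(normals[i] @ normals[j]) >= cos_thresh:
                    labels[j] = region
                    queue.append(j)
        region += 1
    return labels
```

Two coplanar but spatially separated patches end up in different segments, because connectivity, not just normal similarity, drives the growth.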
Detection – Planes
- Plane detection using the Hough transform
- Find the orientation as a maximum on the Gaussian sphere

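The orientation search can be illustrated by binning the normals on the Gaussian sphere and taking the fullest cell: the normals of a planar patch cluster at a single point on the sphere. This is a rough sketch with assumed bin counts; the slide does not specify the actual accumulator design.

```python
import numpy as np

def dominant_normal(normals, n_bins=36):
    """Return the dominant orientation as the fullest (theta, phi) cell
    on the Gaussian sphere."""
    # Fold antipodal normals into one hemisphere (n and -n are one plane).
    normals = np.where(normals[:, 2:3] < 0, -normals, normals)
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))        # polar angle
    phi = np.mod(np.arctan2(normals[:, 1], normals[:, 0]), 2 * np.pi)
    acc, t_edges, p_edges = np.histogram2d(
        theta, phi, bins=n_bins, range=[[0, np.pi / 2], [0, 2 * np.pi]])
    ti, pj = np.unravel_index(np.argmax(acc), acc.shape)
    t = 0.5 * (t_edges[ti] + t_edges[ti + 1])   # cell centre
    p = 0.5 * (p_edges[pj] + p_edges[pj + 1])
    return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
```

A uniform (theta, phi) grid over-samples the pole; an equal-area partition of the sphere would be preferable in a real implementation.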
Detection – Cylinders
Cylinder detection using the Hough transform in 2 steps:
- Step 1: Orientation (2 parameters)
- Step 2: Position and radius (3 parameters); u,v search space at the correct radius

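Step 2 can be sketched as a vote for the cylinder-axis position in the (u, v) plane perpendicular to the axis, with the axis direction (from step 1) and a candidate radius taken as given. Each projected point votes for a circle of possible centres. The function and its discretisation choices are mine, for illustration only.

```python
import numpy as np

def cylinder_position(points, axis, radius, n_bins=50):
    """Given the axis direction and a candidate radius, vote for the axis
    position in the plane perpendicular to the axis."""
    axis = axis / np.linalg.norm(axis)
    # Orthonormal basis (u, v) of the plane perpendicular to the axis.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(axis, u)
    pu, pv = points @ u, points @ v
    lo = min(pu.min(), pv.min()) - radius
    hi = max(pu.max(), pv.max()) + radius
    acc = np.zeros((n_bins, n_bins))
    ang = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
    for a, b in zip(pu, pv):
        # Candidate centres lie on a circle of the given radius around (a, b).
        cu = a + radius * np.cos(ang)
        cv = b + radius * np.sin(ang)
        iu = ((cu - lo) / (hi - lo) * n_bins).astype(int).clip(0, n_bins - 1)
        iv = ((cv - lo) / (hi - lo) * n_bins).astype(int).clip(0, n_bins - 1)
        np.add.at(acc, (iu, iv), 1)
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    cell = (hi - lo) / n_bins
    return lo + (i + 0.5) * cell, lo + (j + 0.5) * cell  # centre in (u, v)
```

In the full procedure this 2D vote would be repeated over candidate radii, giving the 3-parameter (position + radius) search the slide describes.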
Example: detection of two cylinders
1. Point cloud segment
2. Surface normals
3. Normals on the Gaussian sphere
4. Orientation of the first cylinder (next: position)
5. Remove the first cylinder's points from the segment
6. Procedure repeated for the second cylinder
7. Result: two detected cylinders

Fitting
- Complete CSG model + constraint specification
- Final least-squares parameter estimation of the CSG model:
  - Minimise the sum of squared distances
  - Enforce constraints

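For a single plane primitive, the "minimise the sum of squared distances" step has a closed form via SVD: the optimal normal is the singular direction of least variance of the centred points. This sketch covers only that one primitive and ignores the constraint handling and the full CSG parameterisation described on the slide.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit minimising squared orthogonal distances.

    Returns a point on the plane (the centroid), the unit normal, and the
    RMS of the orthogonal residuals."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                     # direction of least variance
    residuals = (points - centroid) @ normal
    return centroid, normal, np.sqrt(np.mean(residuals ** 2))
```

Cylinders and constrained CSG models have no such closed form and require iterative least squares with the Hough-detected parameters as the initial values.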
Results on platform modelling
- Scanned by Delftech in 2003
- Subset of 17.7 million points used by TUD
- Automated detection of 2338 objects
- R.M.S. of residuals: 4.3 mm

Results on platform modelling – Statistics
- Points: 17.7 million
- Points in segments: 14.2 million (80%)
- Points on objects: 9.3 million (53%)
- Detected: 946 planar patches, 1392 cylinders
- Data reduction: object parameters Mb to 0.1 Mb

Results on platform modelling – Accuracy
Residual analysis:
- RMS: 4.3 mm
- 83% < 5 mm
- 96% < 10 mm

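The residual summary on the slide (RMS plus the fraction of residuals under 5 mm and under 10 mm) is straightforward to compute; a small helper with an invented name, for illustration only.

```python
import numpy as np

def residual_stats(residuals_mm):
    """Summarise fit residuals: RMS, fraction < 5 mm, fraction < 10 mm."""
    r = np.abs(np.asarray(residuals_mm, dtype=float))
    rms = np.sqrt(np.mean(r ** 2))
    return rms, np.mean(r < 5.0), np.mean(r < 10.0)
```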
Accuracy
Data precision:
- Scanner: 6 mm (averaging: 3 mm), scanner dependent
Model precision:
- Discrepancies models – real world: mm ?
- Limited production accuracy
- Deformations
- Imperfections in segmentation

Accuracy
Object deformation or segmentation limitations?
- Fitting after initial segmentation: max. residual 21 mm
- Fitting after rejecting large residuals: max. residual 9 mm

Future – automation
Reconstruction using laser data:
- Segmentation, primitive detection (available)
- Correspondence between primitives → registration
Model improvement:
- Constraint detection (piping structure)
- Recognition of complex elements in a database
- Integration with digital imagery

Future – integration with imagery
Instrumentation developments:
- Scanners with integrated high-resolution digital camera
Accuracy improvement, imagery complementary:
- Laser for surfaces, image for edges
- Integrated fitting of models to laser and image data
Flexibility of image acquisition:
- Completeness
- Non-geometric information (What is there?)

Conclusions
- Bright future for automation using laser data
- More research to be done:
  - Automated registration
  - Integration with digital imagery
- Using domain knowledge for automated modelling, with a closer connection to the model users:
  - Domain knowledge for automation
  - Utilisation of research results