Automated Reconstruction of Industrial Sites – Frank van den Heuvel, Tahir Rabbani
Overview: Introduction; Automation: how does it work?; Sample project: off-shore platform; Accuracy; Future; Conclusions
The group Photogrammetry & Remote Sensing: “Development of efficient techniques for the acquisition of 3D information by computer-assisted analysis of image and range data”
The project Services and Training through Augmented Reality (STAR) EU fifth framework – IST programme “Develop new Augmented Reality techniques for training, on-line documentation, maintenance and planning purposes in industrial applications” AR-example: virtual human in video
The project Services and Training through Augmented Reality (STAR) Partners: Siemens, KULeuven, EPFL, UNIGE, Realviz. TUDelft: “Automated 3D reconstruction of industrial installations from laser and image data”
Automated reconstruction procedure Overview (1/3) Segmentation Grouping points of surface patches
Automated reconstruction procedure Overview (2/3) Segmentation Grouping points of surface patches Object Detection Finding planes and cylinders
Automated reconstruction procedure Overview (3/3) Segmentation Grouping points of surface patches Object Detection Finding planes and cylinders Fitting Final parameter estimation
Segmentation – step 1 Estimation of surface normals using K-nearest neighbours (here K=10 points)
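A minimal sketch of this step, assuming the scan is an (N, 3) numpy array and using scipy's cKDTree as our choice of neighbour search: the normal of each point is taken as the direction of least variance (smallest eigenvector of the local covariance) of its K nearest neighbours, with K = 10 as on the slide.

    import numpy as np
    from scipy.spatial import cKDTree

    def estimate_normals(points, k=10):
        # points: (N, 3) array of scan coordinates
        tree = cKDTree(points)
        _, idx = tree.query(points, k=k)              # K nearest neighbours of every point
        normals = np.empty_like(points, dtype=float)
        for i, nbrs in enumerate(idx):
            cov = np.cov(points[nbrs].T)              # local 3x3 covariance of the neighbourhood
            eigval, eigvec = np.linalg.eigh(cov)      # eigenvalues in ascending order
            normals[i] = eigvec[:, 0]                 # smallest eigenvalue -> surface normal
        return normals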
Segmentation – step 2 Region growing using: Connectivity (K-nearest neighbours) Surface smoothness (angle between normals)
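A sketch of the region growing under the same assumptions: a segment grows over the K-nearest-neighbour graph as long as the angle between neighbouring normals stays below a threshold. The parameter values (k, angle_deg) are illustrative, not the ones used in the project.

    import numpy as np
    from scipy.spatial import cKDTree

    def region_growing(points, normals, k=10, angle_deg=15.0):
        tree = cKDTree(points)
        _, nbrs = tree.query(points, k=k)                 # connectivity: K nearest neighbours
        cos_thresh = np.cos(np.radians(angle_deg))        # smoothness: angle between normals
        labels = np.full(len(points), -1, dtype=int)
        current = 0
        for seed in range(len(points)):
            if labels[seed] != -1:
                continue
            stack = [seed]
            labels[seed] = current
            while stack:
                p = stack.pop()
                for q in nbrs[p]:
                    if labels[q] == -1 and abs(np.dot(normals[p], normals[q])) > cos_thresh:
                        labels[q] = current
                        stack.append(q)
            current += 1
        return labels                                     # segment index per point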
Detection – Planes Plane detection using Hough transform Find orientation as maximum on Gaussian sphere
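A minimal sketch of this plane detection: the orientation is the maximum of a (theta, phi) histogram of the normals on the Gaussian sphere, followed by a 1D Hough over the plane offset. The gridding and bin sizes are our own illustrative choices, not those of the STAR software.

    import numpy as np

    def detect_plane(points, normals, n_theta=90, n_phi=180, d_bin=0.01):
        # Orientation: histogram of the normal directions on the Gaussian sphere
        theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))
        phi = np.arctan2(normals[:, 1], normals[:, 0])
        H, t_edges, p_edges = np.histogram2d(theta, phi, bins=(n_theta, n_phi))
        it, ip = np.unravel_index(np.argmax(H), H.shape)   # maximum on the Gaussian sphere
        t = 0.5 * (t_edges[it] + t_edges[it + 1])
        p = 0.5 * (p_edges[ip] + p_edges[ip + 1])
        n = np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])
        # Position: 1D Hough over the offset d in the plane equation n . x = d
        d = points @ n
        hist, edges = np.histogram(d, bins=max(int(np.ptp(d) / d_bin), 1))
        j = np.argmax(hist)
        return n, 0.5 * (edges[j] + edges[j + 1])          # plane: n . x = d0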
Detection – Cylinders Cylinder detection using Hough transform in 2 steps: Step 1: Orientation
Detection – Cylinders Cylinder detection using Hough transform in 2 steps: Step 1: Orientation (2 parameters); Step 2: Position and Radius (3 parameters) – u,v search space at the correct radius
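A minimal sketch of the two-step cylinder Hough under our own discretisation choices: Step 1 scores candidate axis directions on a hemisphere grid by how many surface normals are perpendicular to them (cylinder normals lie in the plane perpendicular to the axis); Step 2 projects the points onto the (u, v) plane perpendicular to the axis and lets every point vote for circle centres along its projected normal, for a range of candidate radii. Grid sizes, tolerances and the radius range (in metres) are illustrative assumptions.

    import numpy as np

    def cylinder_axis(normals, n_theta=45, n_phi=90, perp_tol=0.05):
        # Step 1: orientation (2 parameters) -- score candidate axes on a hemisphere grid
        theta, phi = np.meshgrid(np.linspace(0.0, np.pi / 2, n_theta),
                                 np.linspace(0.0, 2 * np.pi, n_phi, endpoint=False))
        axes = np.column_stack([(np.sin(theta) * np.cos(phi)).ravel(),
                                (np.sin(theta) * np.sin(phi)).ravel(),
                                np.cos(theta).ravel()])
        votes = (np.abs(normals @ axes.T) < perp_tol).sum(axis=0)
        return axes[np.argmax(votes)]

    def cylinder_position_radius(points, normals, axis, radii, cell=0.01):
        # Step 2: position and radius (3 parameters) -- (u, v) accumulator per candidate radius
        u = np.cross(axis, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-6:
            u = np.cross(axis, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(axis, u)
        p2 = np.column_stack([points @ u, points @ v])
        n2 = np.column_stack([normals @ u, normals @ v])
        n2 /= np.linalg.norm(n2, axis=1, keepdims=True)
        lo = p2.min(axis=0) - radii.max()
        hi = p2.max(axis=0) + radii.max()
        nbins = [max(int(x), 1) for x in np.ceil((hi - lo) / cell)]
        best_score, best_c, best_r = -1, None, None
        for r in radii:
            centres = np.vstack([p2 - r * n2, p2 + r * n2])    # normal sign is unknown
            H, xe, ye = np.histogram2d(centres[:, 0], centres[:, 1], bins=nbins,
                                       range=[[lo[0], hi[0]], [lo[1], hi[1]]])
            i, j = np.unravel_index(np.argmax(H), H.shape)
            if H[i, j] > best_score:
                best_score = H[i, j]
                best_c = np.array([0.5 * (xe[i] + xe[i + 1]), 0.5 * (ye[j] + ye[j + 1])])
                best_r = r
        return best_c[0] * u + best_c[1] * v, best_r           # point on the axis, radius

    def detect_cylinder(points, normals, radii=np.arange(0.02, 0.50, 0.01), perp_tol=0.1):
        axis = cylinder_axis(normals)
        keep = np.abs(normals @ axis) < perp_tol               # points consistent with this axis
        centre, radius = cylinder_position_radius(points[keep], normals[keep], axis, radii)
        return axis, centre, radius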
Example: detection of two cylinders Point cloud segment
Example: detection of two cylinders Surface normals
Example: detection of two cylinders Normals on Gaussian sphere
Example: detection of two cylinders Orientation of first cylinder (next: position)
Example: detection of two cylinders Remove first cylinder points from segment
Example: detection of two cylinders Procedure repeated for second cylinder
Example: detection of two cylinders Result: two detected cylinders
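The sequence of this example (detect the strongest cylinder, remove its points, repeat) can be sketched as below, reusing detect_cylinder from the Hough sketch above; the inlier distance threshold is an illustrative assumption.

    import numpy as np

    def detect_two_cylinders(points, normals, dist_thresh=0.01):
        cylinders = []
        pts, nrm = points, normals
        for _ in range(2):
            axis, centre, radius = detect_cylinder(pts, nrm)   # strongest remaining cylinder
            d = pts - centre
            d -= np.outer(d @ axis, axis)                      # drop the component along the axis
            inlier = np.abs(np.linalg.norm(d, axis=1) - radius) < dist_thresh
            cylinders.append((axis, centre, radius))
            pts, nrm = pts[~inlier], nrm[~inlier]              # remove its points, then repeat
        return cylinders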
Fitting Complete CSG model + constraint specification Final least-squares parameter estimation of CSG model
Fitting Final least-squares parameter estimation of CSG model Minimise sum of squared distances Enforce constraints
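A minimal sketch of this final step under our own assumptions: a toy "CSG" of one plane and one cylinder whose axis is constrained to be perpendicular to the plane (e.g. a pipe meeting a deck), fitted jointly with scipy.optimize.least_squares by minimising the sum of squared point-to-surface distances. Here the constraint is enforced through the parameterisation (the cylinder axis is the plane normal); the detection results supply the initial values x0.

    import numpy as np
    from scipy.optimize import least_squares

    def plane_frame(theta, phi):
        # Unit normal from spherical angles plus an orthonormal in-plane basis (u, v)
        n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
        u = np.cross(n, [1.0, 0.0, 0.0])
        if np.linalg.norm(u) < 1e-6:
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        return n, u, np.cross(n, u)

    def residuals(x, plane_pts, cyl_pts):
        theta, phi, d, u0, v0, r = x
        n, u, v = plane_frame(theta, phi)
        res_plane = plane_pts @ n - d                  # signed distance to the plane
        c = u0 * u + v0 * v                            # point on the cylinder axis
        q = cyl_pts - c
        q -= np.outer(q @ n, n)                        # axis direction = plane normal (constraint)
        res_cyl = np.linalg.norm(q, axis=1) - r        # signed distance to the cylinder surface
        return np.concatenate([res_plane, res_cyl])

    def fit_plane_and_cylinder(plane_pts, cyl_pts, x0):
        sol = least_squares(residuals, x0, args=(plane_pts, cyl_pts))
        rms = np.sqrt(np.mean(sol.fun ** 2))           # r.m.s. of the residuals, as reported later
        return sol.x, rms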
Results on platform modelling Scanned by Delftech in 2003; subset of 17.7 million points used by TUD: automated detection of 2338 objects, R.M.S. of residuals 4.3 mm
Results on platform modelling
Results on platform modelling Statistics: Points: 17.7 million; Points in segments: 14.2 million (80%); Points on objects: 9.3 million (53%). Detected: Planar patches: 946; Cylinders: 1392. Data reduction: object parameters – … Mb to 0.1 Mb
Results on platform modelling Accuracy Residual analysis: RMS 4.3 mm; 83% < 5 mm; 96% < 10 mm
Accuracy Data precision: scanner 6 mm (averaging: 3 mm), scanner dependent. Model precision: discrepancies between model and real world: ? mm – limited production accuracy, deformations, imperfections in segmentation
Accuracy Object deformation or segmentation limitations? Fitting after initial segmentation: max. residual 21 mm; fitting after rejecting large residuals: max. residual 9 mm
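The check behind these numbers can be sketched as: fit, inspect the residual distribution, reject the points with large residuals, and fit again. fit_fn is a hypothetical wrapper around the least-squares fit sketched earlier that returns the parameters and the per-point residuals in metres; the 10 mm rejection threshold is an illustrative assumption.

    import numpy as np

    def residual_stats(res):
        res = np.abs(res)
        return {"rms": np.sqrt(np.mean(res ** 2)), "max": res.max(),
                "< 5 mm": np.mean(res < 0.005), "< 10 mm": np.mean(res < 0.010)}

    def refit_after_rejection(fit_fn, points, threshold=0.010):
        params, res = fit_fn(points)                 # fitting after initial segmentation
        before = residual_stats(res)
        keep = np.abs(res) < threshold               # reject points with large residuals
        params, res = fit_fn(points[keep])           # fitting after rejection
        return params, before, residual_stats(res)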
Future – automation Reconstruction using laser data: segmentation and primitive detection (available); correspondence between primitives → registration. Model improvement: constraint detection (piping structure); recognition of complex elements in a database; integration with digital imagery
Future – integration with imagery Instrumentation developments Scanners with integrated high-resolution digital camera Accuracy improvement Imagery complementary: Laser for surfaces, image for edges Integrated fitting of models to laser and image data
Future – integration with imagery Instrumentation developments Scanners with integrated high-resolution camera Accuracy improvement Imagery complementary: Laser for surfaces, image for edges Integrated fitting of models to laser and image data Flexibility of image acquisition: Completeness Non-geometric information (What is there?)
Future – integration with imagery
Conclusions Bright future for automation using laser data. More research to be done: automated registration; integration with digital imagery; using domain knowledge for automated modelling. Closer connection to the model users needed: domain knowledge for automation; utilisation of research results