1
Automated 3D Model Construction for Urban Environments
Christian Frueh, John Flynn, Avideh Zakhor
Next Generation 4D Distributed Modeling and Visualization
University of California, Berkeley, June 13, 2002
2
Presentation Overview
- Introduction
- Ground based modeling: mesh processing
- Airborne modeling: aerial photos, airborne laser scans
- 3D Model Fusion
- Rendering
- Conclusion and Future Work
3
Introduction
Goal: generate a 3D model of a city for virtual walk-, drive-, and fly-throughs and simulations. Requirements: fast, automated, photorealistic.
For Walk/Drive-Thru: highly detailed 3D model of street scenery and building façades.
For Fly-Thru: coarse-resolution 3D model of terrain and building tops and sides.
4
Introduction
Ground Based Modeling: laser scans and images from an acquisition vehicle yield a 3D model of building façades.
Airborne Modeling: laser scans and images from a plane yield a 3D model of terrain and building tops.
Fusion: combine both into a complete 3D city model.
5
Airborne Modeling
Goal: acquisition of terrain shape and top-view building geometry.
Available data: aerial photos and airborne laser scans.
Geometry, two approaches: I) stereo matching of photos; II) airborne laser scans.
Texture: from aerial photos.
6
Airborne Modeling, Approach I: Stereo Matching
Stereo photo pairs from city/urban areas, ~60% overlap.
Pipeline: camera parameter computation, matching, distortion reduction, model generation.
Segmentation: manual, semi-automatic (last year), or automated.
7
Stereo Matching Stereo pair from downtown Berkeley and the estimated disparity after removing perspective distortions
8
Stereo Matching Results Downtown Oakland
9
Airborne Modeling, Approach II: Airborne Laser Scans
Scanning the city from a plane yields a point cloud.
Resolution: 1 scan point/m²; Berkeley: 40 million scan points.
10
Airborne Laser Scans
Re-sampling the point cloud: sorting into a grid, filling holes.
The resulting map-like height field is usable for mesh generation and Monte Carlo Localization.
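The re-sampling step can be sketched as follows: sort the (x, y, z) points into a regular grid, keep one height per cell, and fill empty cells. Keeping the highest z per cell and filling holes from the nearest filled neighbor in the same row are illustrative assumptions, not the authors' exact rules.

```python
def rasterize_height_field(points, cell=1.0):
    """Sketch: re-sample an airborne point cloud (x, y, z) into a
    map-like height field. Keeps the highest z per grid cell (roofs
    over ground) and fills empty cells from the nearest filled
    neighbor in the same row as a simple stand-in for hole filling.
    Cell size matches the ~1 scan point/m^2 resolution."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    w = int((max(xs) - x0) / cell) + 1
    h = int((max(ys) - y0) / cell) + 1
    grid = [[None] * w for _ in range(h)]
    for x, y, z in points:
        i, j = int((y - y0) / cell), int((x - x0) / cell)
        if grid[i][j] is None or z > grid[i][j]:
            grid[i][j] = z
    for row in grid:                     # fill holes along each row
        last = None
        for j, v in enumerate(row):
            if v is None:
                row[j] = last
            else:
                last = v
    return grid
```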
11
Textured Mesh Generation
1. Connect grid vertices into a mesh.
2. Apply Qslim simplification.
3. Texture mapping (semi-automatic): manual selection of a few correspondence points (10 min for all of Berkeley), automated camera pose estimation, automated computation of texture for the mesh.
12
Airborne Model East Berkeley campus with campanile
13
Airborne Model http://www-video.eecs.berkeley.edu/~frueh/3d/airborne/ Downtown Berkeley
14
Ground Based Modeling
Goal: acquisition of highly detailed 3D building façade models.
Acquisition vehicle: truck with a rack carrying 2 fast 2D laser scanners and a digital camera.
Scanning setup: vertical 2D laser scanner for geometry capture, horizontal scanner for pose estimation.
15
Scan Matching & Initial Path Computation
Horizontal laser scans are captured continuously during vehicle motion; consecutive scans (t = t0, t = t1) overlap.
Relative position estimation by scan-to-scan matching: each match yields a translation (Δu, Δv) and a rotation Δθ.
Adding the relative steps (Δu_i, Δv_i, Δθ_i) yields the path (x_i, y_i, θ_i), a 3 DOF pose (x, y, yaw).
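Accumulating the relative scan-matching steps into a global path can be sketched as below; the steps are assumed to be expressed in the vehicle frame, so each one is rotated by the current yaw before it is added.

```python
import math

def integrate_path(steps, pose=(0.0, 0.0, 0.0)):
    """Accumulate relative scan-matching steps (du, dv, dtheta),
    expressed in the vehicle frame, into global poses (x, y, yaw).
    A minimal sketch of the path computation described above."""
    x, y, yaw = pose
    path = [(x, y, yaw)]
    for du, dv, dyaw in steps:
        # rotate the local step into the global frame, then translate
        x += du * math.cos(yaw) - dv * math.sin(yaw)
        y += du * math.sin(yaw) + dv * math.cos(yaw)
        yaw += dyaw
        path.append((x, y, yaw))
    return path
```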
16
6 DOF Pose Estimation From Images
Scan matching cannot estimate vertical motion: small bumps and rolls, slopes in hilly areas.
The full 6 DOF pose of the vehicle is important; it affects future processing of the 3D and intensity data and texture mapping of the resulting 3D models.
Extend the initial 3 DOF pose by deriving the missing 3 DOF (z, pitch, roll) from images.
17
6 DOF Pose Estimation From Images
Central idea: photo-consistency. Each 3D scan point can be projected into the images using the initial 3 DOF pose. If the pose estimate is correct, a point should appear the same in all images. Discrepancies in the projected position of 3D points across multiple images are used to solve for the full pose.
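The projection underlying the photo-consistency check can be sketched with a pinhole model. For brevity the pose here has only (x, y, z, yaw) rather than the full 6 DOF, and the intrinsics (focal length, principal point) are illustrative assumptions.

```python
import math

def project(point, pose, focal=1000.0, cx=640.0, cy=480.0):
    """Sketch: project a 3D scan point into an image taken from a
    given pose (x, y, z, yaw only, for brevity). With a correct pose,
    the same 3D point lands on matching pixels in all images; with a
    wrong pose, the projections disagree. Intrinsics are assumptions."""
    px, py, pz = point
    x0, y0, z0, yaw = pose
    dx, dy, dz = px - x0, py - y0, pz - z0
    # rotate into the camera frame (camera looks along +x)
    cam_x = dx * math.cos(-yaw) - dy * math.sin(-yaw)
    cam_y = dx * math.sin(-yaw) + dy * math.cos(-yaw)
    if cam_x <= 0:
        return None                     # point behind the camera
    u = cx + focal * cam_y / cam_x      # lateral offset -> image u
    v = cy - focal * dz / cam_x         # height offset  -> image v
    return (u, v)
```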
18
6 DOF Pose Estimation – Algorithm
Use the 3 DOF pose of the laser as the initial estimate.
Project scan points into both images; if not consistent, use image correlation to find the correct projection.
RANSAC is used for robustness.
19
6 DOF Pose Estimation – Algorithm
Use the 3 DOF pose from laser scan matching as the initial estimate.
Some scan points are taken simultaneously with each image, so the exact projection of these scan points is known in that image.
Image correlation finds the projection of these scan points in the other images; the results are not always correct and contain many outliers.
The reprojection error is minimized across many images to find the pose of each image, with RANSAC for robustness.
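The RANSAC loop can be sketched as below. The real system minimizes full 6 DOF reprojection error; here a translation-only model between point correspondences stands in for it, purely to show how outlier-heavy correlation results are handled.

```python
import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """Minimal RANSAC sketch: fit a 2D translation (dx, dy) to noisy
    point correspondences [((x, y), (x2, y2)), ...] while rejecting
    outliers. The translation model is an illustrative stand-in for
    the full reprojection-error minimization."""
    rng = random.Random(seed)
    best_inliers, best_model = [], None
    for _ in range(iters):
        (xa, ya), (xb, yb) = rng.choice(matches)  # minimal sample: 1 match
        dx, dy = xb - xa, yb - ya
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) < tol
                   and abs(m[1][1] - m[0][1] - dy) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers, best_model = inliers, (dx, dy)
    # refit on all inliers (least squares = mean for a pure translation)
    n = len(best_inliers)
    dx = sum(b[0] - a[0] for a, b in best_inliers) / n
    dy = sum(b[1] - a[1] for a, b in best_inliers) / n
    return (dx, dy), best_inliers
```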
20
6 DOF Pose Estimation – Results with 3 DOF pose with 6 DOF pose
21
6 DOF Pose Estimation – Results
22
Monte Carlo Localization (1)
Previously: global 3 DOF pose correction using aerial photography.
a) path before MCL correction; b) path after MCL correction.
After correction, the points fit the edges of the aerial image.
23
Monte Carlo Localization (2)
Now: extend MCL to work with airborne laser data and a 6 DOF pose.
- No perspective shifts of building tops, no shadow lines
- Use terrain shape to estimate the z coordinate of the truck
- Fewer particles necessary, increased computation speed
- Significantly higher accuracy near tall buildings and tree areas
- Correct additional DOF of the vehicle pose (z, pitch, roll)
- Modeling no longer restricted to flat areas
24
Monte Carlo Localization (3)
Track the global 3D position of the vehicle to correct the relative 6 DOF motion estimates.
Resulting corrected path overlaid on the airborne laser height field.
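One predict/weight/resample cycle of MCL can be sketched on a toy 1D problem: particles are positions along a drive, the map is a height profile, and the measurement is the terrain height under the vehicle. The noise levels and the 1D setting are illustrative assumptions; the actual system tracks a full 6 DOF pose against the airborne height field.

```python
import math
import random

def mcl_step(particles, motion, height_map, z_meas, sigma=0.5, rng=None):
    """One Monte Carlo Localization cycle on a 1D height profile:
    predict with noisy odometry, weight by agreement between expected
    terrain height and the measured height, then resample."""
    rng = rng or random.Random(0)
    # predict: apply odometry with noise
    moved = [x + motion + rng.gauss(0, 0.1) for x in particles]

    def h(x):  # nearest-cell lookup in the height profile
        i = min(max(int(round(x)), 0), len(height_map) - 1)
        return height_map[i]

    # weight: Gaussian likelihood of the height measurement
    weights = [math.exp(-((h(x) - z_meas) ** 2) / (2 * sigma ** 2))
               for x in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample proportionally to weight
    return rng.choices(moved, weights=weights, k=len(moved))
```

After one step with a measurement matching the raised half of the profile, the particles collapse onto that region.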
25
Path Segmentation
Vertical scans: 107,082; scan points: ~15 million; 24 min, 6,769 meters. Too large to process as one block!
Segment the path into quasi-linear pieces: cut the path at curves and empty areas, remove redundant segments.
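Cutting the path at curves can be sketched by splitting wherever the heading turns sharply; the 30° threshold is an illustrative assumption, not the authors' value.

```python
import math

def segment_path(path, max_turn_deg=30.0):
    """Split a driven path into quasi-linear segments by cutting
    wherever the heading between consecutive points turns sharply.
    `path` is a list of (x, y) positions."""
    segments, current = [], [path[0]]
    prev_heading = None
    for a, b in zip(path, path[1:]):
        heading = math.degrees(math.atan2(b[1] - a[1], b[0] - a[0]))
        if prev_heading is not None:
            # smallest signed angle between the two headings
            turn = abs((heading - prev_heading + 180) % 360 - 180)
            if turn > max_turn_deg:      # curve: start a new segment
                segments.append(current)
                current = [a]
        current.append(b)
        prev_heading = heading
    segments.append(current)
    return segments
```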
26
Path Segmentation Resulting path segments overlaid with edges of airborne laser height map
27
Simple Mesh Generation
28
Triangulate: point cloud → mesh.
Problem: partially captured foreground objects and erroneous scan points due to glass reflection make the side views look “noisy”.
Solution: remove the foreground and extract the façades.
29
Façade Extraction and Processing (1)
1. Transform the path segment into a depth image of depth values s_{n,ν} for scan points P_{n,ν}.
2. Histogram analysis over the vertical scans: the main depth is the dominant peak (the façade); the split depth lies at a local minimum in front of it, separating foreground from background; ground points are excluded.
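The histogram analysis can be sketched for a single vertical scan as below. The bin size, the peak rule, and walking back to the nearest local minimum are illustrative assumptions about how main depth and split depth are chosen.

```python
def split_depth(depths, bin_size=1.0):
    """Sketch of the per-scan histogram analysis: bin the depth values,
    take the dominant bin as the main (facade) depth, and place the
    split depth at the local minimum reached by walking back toward
    the scanner from the main peak."""
    nbins = int(max(depths) / bin_size) + 1
    hist = [0] * nbins
    for d in depths:
        hist[int(d / bin_size)] += 1
    # dominant peak; ties broken toward the farther depth
    main_bin = max(range(nbins), key=lambda i: (hist[i], i))
    # walk back from the main peak to the nearest local minimum
    split_bin = main_bin
    while split_bin > 0 and hist[split_bin - 1] <= hist[split_bin]:
        split_bin -= 1
    return split_bin * bin_size, main_bin * bin_size
```

Points closer than the split depth are then assigned to the foreground layer, the rest to the background (façade) layer.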
30
Façade Extraction and Processing (2)
3. Separate the depth image into 2 layers: background = building façades; foreground = trees, cars, etc.
31
Façade Extraction and Processing (3)
4. Process the background layer:
- Detect and remove invalid scan points
- Fill areas occluded by foreground objects by extending geometry from the boundaries (horizontal, vertical, and planar interpolation; RANSAC)
- Apply segmentation and remove isolated segments
- Fill remaining holes in large segments
Final result: a “clean” background layer
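The simplest of the fill strategies, horizontal interpolation across an occlusion, can be sketched on a depth image stored as a list of rows; the vertical, planar, and RANSAC variants work analogously on columns or fitted planes. Representing missing values as `None` is an assumption of this sketch.

```python
def fill_holes_rows(depth, hole=None):
    """Sketch of occlusion filling on a depth image: linearly
    interpolate across runs of `hole` values between valid neighbors
    in the same row (horizontal interpolation). Holes touching the
    image border are left as-is."""
    out = []
    for row in depth:
        row = list(row)
        i = 0
        while i < len(row):
            if row[i] is hole:
                j = i
                while j < len(row) and row[j] is hole:
                    j += 1
                if i > 0 and j < len(row):  # hole bounded on both sides
                    a, b = row[i - 1], row[j]
                    for k in range(i, j):   # linear interpolation
                        t = (k - i + 1) / (j - i + 1)
                        row[k] = a + (b - a) * t
                i = j
            else:
                i += 1
        out.append(row)
    return out
```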
32
Façade Extraction – Examples (1) without processing with processing
33
Façade Extraction – Examples (2) with processing without processing
34
Façade Extraction – Examples (3) without processing with processing
35
Facade Processing
36
Foreground Removal
37
Mesh Generation Downtown Berkeley
38
Automatic Texture Mapping (1)
The camera is calibrated and synchronized with the laser scanners, so the transformation matrix between camera image and laser scan vertices can be computed.
1. Project the geometry into the images.
2. Mark occluding foreground objects in each image.
3. For each background triangle: search for pictures in which the triangle is not occluded, and texture it with the corresponding picture area.
39
Automatic Texture Mapping (2)
Efficient representation: texture atlas. Copy the texture of all triangles into one “mosaic” image.
Typical texture reduction: a factor of 8 to 12.
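The mosaic idea can be sketched with a shelf-packing scheme for rectangular texture patches; real per-triangle atlas packing is more involved, and the shelf strategy here is only an illustrative assumption.

```python
def pack_atlas(rects, atlas_w):
    """Sketch of building a texture atlas by shelf packing: place each
    (w, h) texture patch left to right on the current shelf, starting a
    new shelf when the row is full. Returns the (x, y) offset of each
    patch in the atlas and the total atlas height used."""
    x = y = shelf_h = 0
    offsets = []
    for w, h in rects:
        if x + w > atlas_w:             # row full: start a new shelf
            y += shelf_h
            x = shelf_h = 0
        offsets.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)       # shelf height = tallest patch
    return offsets, y + shelf_h
```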
40
Automatic Texture Mapping (3)
Large foreground objects: some of the filled-in triangles are not visible in any image, leaving “texture holes” in the atlas.
Texture synthesis (preliminary):
- Mark the holes corresponding to non-textured triangles in the atlas
- Search the image for areas matching the hole boundaries
- Fill the hole by copying the missing pixels from these areas
41
Automatic Texture Mapping (4) Texture holes marked Texture holes filled
42
Automatic Texture Mapping (5)
43
Ground Based Modeling - Results Façade models of downtown Berkeley
44
Ground Based Modeling - Results Façade models of downtown Berkeley
45
Model Fusion
Goal: fuse the ground based and airborne models into one single model.
1. Registration of the models (façade model + airborne model)
2. Combining the registered meshes
46
Registration of Models
The models are already registered with each other via Monte Carlo Localization!
Remaining question: which model to use where?
47
Preparing Ground Based Models
Intersect path segments with each other; remove degenerate, redundant triangles in overlapping areas.
(Shown: original mesh vs. redundant triangles removed.)
48
Preparing Airborne Model
The ground based model has 5 to 10 times higher resolution.
- Remove façades in the airborne model where ground based geometry is available
- Add the ground based façades
- Fill remaining gaps with a “blend mesh” to hide model transitions
49
Preparing Airborne Model Initial airborne model
50
Preparing Airborne Model Remove facades where ground based geometry is available
51
Combining Models Add ground based façade models
52
Combining Models Fill remaining gaps with a “blend mesh” to hide model transitions
53
Model Fusion - Results
54
Rendering
Ground based models are difficult to render interactively: up to 270,000 triangles and 20 MB of texture per path segment; 4.28 million triangles and 348 MB of texture for 4 downtown blocks.
Subdivide the model and create multiple levels of detail (LODs); generate a scene graph and decide which LOD to render when.
55
Multiple LODs for Façade Meshes
Highest LOD: the original mesh.
Lower LOD: Qslim mesh simplification (geometry reduced to 10%) and texture subsampling (to 25%).
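Deciding which LOD to render can be sketched as a distance-based switch in the scene graph; the threshold distances here are illustrative assumptions, not values from the system.

```python
def pick_lod(distance, thresholds=(50.0, 200.0)):
    """Sketch of distance-based LOD selection for a scene-graph node:
    LOD 0 (full resolution) up close, coarser LODs farther away.
    `thresholds` gives the switch-over distances in ascending order."""
    for lod, limit in enumerate(thresholds):
        if distance < limit:
            return lod
    return len(thresholds)              # coarsest LOD beyond all thresholds
```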
56
Façade Model Subdivision for Rendering
Subdivide the 2 highest LODs of the façade meshes along cut planes.
(Diagram: global scene → sub-scenes per path segment → submeshes, each available at LOD 0 and LOD 1, with LOD 2 at the sub-scene level.)
57
Interactive Rendering
Downtown blocks: interactive rendering in a web-based browser!
58
Future Work
- Resolution enhancement and post-processing of LIDAR data
- Devise a new data acquisition system and algorithms to capture both sides of the street simultaneously and texture for the upper parts of tall buildings
- Include foreground objects in the model
- Add a temporal component to dynamically update models
- Compact representation
- Interactive rendering