1
Extracting Objects from Range and Radiance Images
Yizhou Yu, Andras Ferencz, Jitendra Malik
Computer Science Division, University of California at Berkeley
2
Image-based Modeling and Rendering
Recover models of real-world scenes and make possible various visual interactions:
–Vary viewpoint
–Vary lighting
–Vary scene configuration
3
Image-based Modeling and Rendering
1st generation: vary viewpoint but not lighting
–Recover geometry (explicit or implicit)
–Acquire photographs
–Facade, Plenoptic Modeling, View Morphing, Lumigraph, Layered Depth Images, Concentric Mosaics, Light Field Rendering, etc.
4
Image-based Modeling and Rendering
2nd generation: vary viewpoint and lighting
–Recover geometry & reflectance properties
–Render using light transport simulation or local shading
–[Yu, Debevec, Malik & Hawkins 99], [Yu & Malik 98], [Sato, Wheeler & Ikeuchi 97]
[Figure: original lighting & viewpoint vs. novel lighting & viewpoint]
5
Image-based Modeling and Rendering
3rd generation: vary spatial configurations in addition to viewpoint and lighting
[Figure: novel viewpoint vs. novel viewpoint & configuration]
6
Our Framework
Input
–Multiple range scans of a scene
–Multiple photographs of the same scene
Output
–Geometric meshes of each object in the scene
–Registered texture maps for objects
7
Overview & Video
Pipeline: Range Images -> (Registration) -> Point Cloud -> (Segmentation) -> Point Groups -> (Reconstruction) -> Meshes -> Simplified Meshes; Radiance Images -> (Pose Estimation) -> Calibrated Images -> (Texture Map Synthesis) -> Texture Maps; Simplified Meshes + Texture Maps -> Objects
8
Outline
–Overview
–Range Data Segmentation
–Radiance Image Registration
–Meshing and Texture-Mapping
–Results
–Future Work
9
Overview
Pipeline recap: Range Images -> Point Cloud -> Point Groups -> Meshes -> Simplified Meshes; Radiance Images -> Calibrated Images -> Texture Maps -> Objects
10
Range Image Registration
–Registration with calibration objects: Cyra Technologies, Inc.
–Registration without calibration objects: [Besl & McKay 92], [Pulli 99]
11
Overview
Pipeline recap: Range Images -> Point Cloud -> Point Groups -> Meshes -> Simplified Meshes; Radiance Images -> Calibrated Images -> Texture Maps -> Objects
12
Previous Work on Range Image Segmentation
Local region growing with surface primitives
–[Hoffman & Jain 87], [Besl & Jain 88], [Newman, Flynn & Jain 93], [Leonardis, Gupta & Bajcsy 95], [Hoover et al. 96]
These methods do not address general free-form shapes and albedo variations, and local region growing is suboptimal in finding object boundaries.
13
Normalized Cut Framework
[Shi & Malik 97], [Malik, Belongie, Shi & Leung 99]
–Recursive binary graph partition by minimizing the normalized cut
–Approximate solution via a generalized eigenvalue problem
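The recursive bipartition step above can be sketched with the standard Shi–Malik spectral relaxation: threshold the second smallest eigenvector of the normalized Laplacian. This is a minimal illustration on a toy affinity matrix (an assumption), not the paper's implementation:

```python
import numpy as np

def ncut_bipartition(W):
    """Approximate two-way normalized cut of a weighted graph.

    W : symmetric (n, n) affinity matrix.
    Returns one boolean label per node, obtained by thresholding the
    second smallest eigenvector of the symmetric normalized Laplacian.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(d)
    # Symmetric normalized Laplacian: I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(d)) - (W * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)
    # Second smallest eigenvector, mapped back through D^{-1/2}
    fiedler = d_inv_sqrt * vecs[:, 1]
    return fiedler > 0

# Toy graph: two tight clusters {0,1,2} and {3,4,5}, one weak bridge.
W = np.full((6, 6), 1e-6)
for group in [(0, 1, 2), (3, 4, 5)]:
    for i in group:
        for j in group:
            W[i, j] = 1.0
W[2, 3] = W[3, 2] = 0.01
np.fill_diagonal(W, 0.0)

labels = ncut_bipartition(W)
```

The cut lands on the weak bridge edge because the normalized cut penalizes separating strongly connected nodes relative to each side's total association.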
14
From 2D Image to 3D Point Cloud
Neighborhood
–3D spherical region
Complexity
–E.g., nineteen 800×800 scans
–Clustering
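A 3D spherical-region neighborhood query can be sketched as below; the slide does not say what spatial index the authors used, so a brute-force distance test stands in for it here:

```python
import numpy as np

def spherical_neighbors(points, center, radius):
    """Indices of all points within a 3D sphere around `center`.

    points : (n, 3) array of scanned 3D points.
    Brute force O(n); a k-d tree or voxel grid would replace this
    for the millions of points in nineteen 800x800 scans.
    """
    d2 = np.sum((points - center) ** 2, axis=1)
    return np.nonzero(d2 <= radius ** 2)[0]

pts = np.array([[0.0, 0, 0], [0.1, 0, 0], [0, 0.2, 0], [5, 5, 5]])
idx = spherical_neighbors(pts, np.array([0.0, 0, 0]), 0.5)
```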
15
Point Cloud Segmentation: Cues
–Normal orientation
–Returned laser intensity
–Proximity
16
Point Cloud Segmentation: Graph Setup
–Nodes: clusters
–Edges: local + random long-range connections
–Weights:
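The slide's weight formula did not survive extraction. A common form for combining cues like these multiplies per-cue Gaussian affinities; the sketch below uses that form with made-up scale parameters, so treat both the formula and the sigmas as assumptions:

```python
import numpy as np

def edge_weight(n_i, n_j, I_i, I_j, p_i, p_j,
                sigma_n=0.5, sigma_I=0.1, sigma_d=0.3):
    """Affinity between two clusters from the three segmentation cues.

    n_i, n_j : unit surface normals        (normal orientation cue)
    I_i, I_j : returned laser intensities  (intensity cue)
    p_i, p_j : 3D positions                (proximity cue)
    The sigma_* scales are illustrative assumptions.
    """
    w_normal = np.exp(-(1.0 - np.dot(n_i, n_j)) / sigma_n ** 2)
    w_inten = np.exp(-(I_i - I_j) ** 2 / sigma_I ** 2)
    w_prox = np.exp(-np.sum((p_i - p_j) ** 2) / sigma_d ** 2)
    return w_normal * w_inten * w_prox

n = np.array([0.0, 0.0, 1.0])
w_same = edge_weight(n, n, 0.5, 0.5, np.zeros(3), np.zeros(3))
w_diff = edge_weight(n, np.array([1.0, 0.0, 0.0]), 0.5, 0.9,
                     np.zeros(3), np.ones(3))
```

Identical clusters get weight 1; clusters that disagree on any cue are down-weighted multiplicatively.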
17
Point Cloud Segmentation: Criterion
–Use the normalized cut criterion to propose a cut
–Use the normalized weighted average cut to accept a cut
18
Point Cloud Segmentation: Algorithm
Coarse segmentation
–Clustering
–Cluster segmentation: recursive segmentation based on normal continuity and proximity; recursive segmentation based on continuity in laser intensity and proximity
Fine segmentation
–Re-clustering and segmentation on each group from the coarse segmentation
19
Segmentation Results
20
Overview
Pipeline recap: Range Images -> Point Cloud -> Point Groups -> Meshes -> Simplified Meshes; Radiance Images -> Calibrated Images -> Texture Maps -> Objects
21
Previous Work on Pose Estimation
Mathematical background
–From 3 or more points: [Hung et al. '85], [Haralick et al. '89]; overview in [Haralick et al. '94]
–From points, lines, and ellipse-circle pairs: [Qiang et al. '99]
Automatic techniques
–Combinatorial search on automatically detected features: [Huttenlocher and Ullman '90], [Wunsch and Hirzinger '96]
–Hybrid approach (user initialization with silhouette finding): [Neugebauer and Klein '99]
22
Radiance Image Registration
Requirements
–Automatic: many (50–200) images need to be registered
–Accurate: registration must be accurate to within one or two pixels for texture mapping
–General purpose: the scene may be complicated, possibly without easily and uniquely identifiable features
Solution
–Place registration targets in the scene
23
Finding Targets in Images
Pattern matching followed by ellipse fitting
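The ellipse-fitting step can be sketched as a least-squares conic fit to candidate boundary points; this is a generic direct fit, not necessarily the detector the authors used:

```python
import numpy as np

def fit_conic_center(x, y):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to points
    and return its center (works for ellipses, e.g. circular targets
    seen under perspective)."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # Conic coefficients = null-space direction of A (smallest singular vector)
    _, _, Vt = np.linalg.svd(A)
    a, b, c, d, e, f = Vt[-1]
    # The center zeroes the conic's gradient: 2a*xc + b*yc + d = 0, etc.
    return np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])

# Sanity check: points on a circle of radius 5 centered at (3, -2)
t = np.linspace(0, 2 * np.pi, 12, endpoint=False)
cx, cy = fit_conic_center(3 + 5 * np.cos(t), -2 + 5 * np.sin(t))
```

The fitted ellipse's center gives the sub-pixel target position, and its axes carry the shape information used later to prune pose hypotheses.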
24
Combinatorial Search
Brute force
–4 correspondences per image, O(n^4) time for n targets
Alternative
–Use fitted ellipse parameters in addition to position to estimate pose from 2 target matches, O(n^2) time
–20 seconds for a scene with 100 targets
[Figure: 3D targets]
25
Registration Summary
–Place registration targets in the scene before acquiring data
–Automatically detect targets in the data
–Perform combinatorial search to match each image target to its corresponding target in the geometry
–Find the camera pose from the matched points
–Remove targets from the images and fill in the holes
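The "find the camera pose from the matched points" step can be illustrated with a standard DLT camera resection from 2D-3D correspondences. The authors likely use a calibrated pose solver, so treat this as a generic sketch with assumed synthetic intrinsics:

```python
import numpy as np

def dlt_camera(X, u):
    """Direct Linear Transform: recover a 3x4 projection matrix P
    (up to scale) from n >= 6 matches between 3D points X (n, 3)
    and image points u (n, 2)."""
    rows = []
    for (x, y, z), (px, py) in zip(X, u):
        rows.append([x, y, z, 1, 0, 0, 0, 0, -px*x, -px*y, -px*z, -px])
        rows.append([0, 0, 0, 0, x, y, z, 1, -py*x, -py*y, -py*z, -py])
    # Least-squares null vector of the stacked constraints
    _, _, Vt = np.linalg.svd(np.array(rows))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    Xh = np.column_stack([X, np.ones(len(X))])
    uh = Xh @ P.T
    return uh[:, :2] / uh[:, 2:3]

# Synthetic camera (assumed intrinsics) and scene points for a sanity check
K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
Rt = np.hstack([np.eye(3), [[0.1], [-0.2], [5.0]]])
P_true = K @ Rt
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(10, 3))
u = project(P_true, X)
P_est = dlt_camera(X, u)
err = np.max(np.abs(project(P_est, X) - u))
```

With exact correspondences the recovered matrix reprojects the points essentially perfectly; with real detections, reprojection error against the 2-pixel accuracy requirement would be the acceptance test.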
26
Camera Pose Results
–Accuracy: consistently within 2 pixels
–Correctness: correct pose for 58 out of 62 images
27
Overview
Pipeline recap: Range Images -> Point Cloud -> Point Groups -> Meshes -> Simplified Meshes; Radiance Images -> Calibrated Images -> Texture Maps -> Objects
28
Mesh Reconstruction and Simplification
Meshing
–Use the "crust" algorithm [Amenta, Bern & Kamvysselis 98] for coarse geometry
–Use nearest-neighbor connections for detailed geometry
–Possible to use volumetric techniques [Curless & Levoy 96] to merge meshes
Simplification
–Use the quadric error metric [Garland & Heckbert 97]
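The quadric error metric of [Garland & Heckbert 97] can be sketched as follows: each vertex accumulates a 4×4 quadric from the planes of its incident triangles, and the error of placing a vertex at v is the sum of its squared distances to those planes. This is a minimal illustration of the metric only, not the full edge-collapse simplifier:

```python
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental quadric K_p = p p^T for the plane ax + by + cz + d = 0,
    with (a, b, c) a unit normal."""
    p = np.array([a, b, c, d], dtype=float)
    return np.outer(p, p)

def vertex_error(Q, v):
    """Quadric error v^T Q v: sum of squared distances of the
    homogeneous vertex v = (x, y, z, 1) to all planes folded into Q."""
    vh = np.append(v, 1.0)
    return vh @ Q @ vh

# Vertex at the origin lying on the planes x = 0 and y = 0
Q = plane_quadric(1, 0, 0, 0) + plane_quadric(0, 1, 0, 0)
e_on = vertex_error(Q, np.zeros(3))
e_off = vertex_error(Q, np.array([3.0, 4.0, 0.0]))
```

Because quadrics add, collapsing an edge just sums the two endpoint quadrics, which is what makes the metric cheap enough for aggressive simplification.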
29
Reconstructed Mesh with Camera Poses and Calibration Targets
30
Models of Individual Objects
31
Overview
Pipeline recap: Range Images -> Point Cloud -> Point Groups -> Meshes -> Simplified Meshes; Radiance Images -> Calibrated Images -> Texture Maps -> Objects
32
Texture Map Synthesis
Regular texture-mapping with texture coordinates
–Compose a triangular texture patch for each triangle
–Allocate space in the texture maps for each texture patch to assign texture coordinates
–The size of each texture patch is determined by the amount of variation in the photographs
–The derivative of the Gaussian is used as a metric for variation
[Figure: photograph -> texture map -> 3D triangle]
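Sizing each patch by image variation can be sketched like this. The slide names the derivative of the Gaussian as the variation metric; a plain gradient magnitude stands in for it below, and the scaling constant and size bounds are illustrative assumptions:

```python
import numpy as np

def patch_size(region, base=8, max_size=64):
    """Pick a texture-patch resolution for one triangle's image region.

    region : 2D array of intensities covered by the triangle.
    More image variation (mean gradient magnitude) -> more texels,
    clamped to [base, max_size]. The factor of 10 is an assumption.
    """
    gy, gx = np.gradient(region.astype(float))
    variation = np.mean(np.hypot(gx, gy))
    size = int(base * (1.0 + 10.0 * variation))
    return min(max(size, base), max_size)

flat = np.full((16, 16), 0.5)                      # featureless region
busy = np.indices((16, 16)).sum(axis=0) % 2        # high-variation pattern
size_flat = patch_size(flat)
size_busy = patch_size(busy)
```

Flat regions collapse to the minimum patch size while detailed regions get more texels, which keeps the total texture-map budget proportional to visible detail rather than triangle count.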
33
Texture-Mapping and Object Manipulation
34
Video
35
Future Work
–Create watertight geometric models from sparse or incomplete data
–Improve mesh simplification techniques
–Texture compression
36
Acknowledgments
Thanks to Ben Kacyra, Mark Wheeler, Daniel Chudak, and Jonathan Kung at Cyra Technologies, Inc., and Jianbo Shi at CMU. Supported by ONR BMDO, the California MICRO program, and a Microsoft Graduate Fellowship.