Models for Multi-View Object Class Detection
Han-Pang Chiu
Multi-View Object Class Detection
[Figure: training set vs. test set, comparing the multi-view same-object, multi-view object-class, and single-view object-class settings]
The Roadblock
- All existing methods for multi-view object class detection require many real training images of objects from many viewpoints.
- Yet the learning processes for different viewpoints of the same object class should be related.
The Potemkin Model [1]
The Potemkin model can be viewed as a collection of parts, which are oriented 3D primitives. It consists of:
- a 3D class skeleton: the arrangement of part centroids in 3D.
- 2D projective transforms: the shape change of each part from one view to another.
[1] So-called "Potemkin villages" were artificial villages, constructed only of facades. Our models, too, are constructed of facades.
Related Approaches
- Multiple 2D models [Crandall07, Torralba04, Leibe07]
- Cross-view constraints [Thomas06, Savarese07, Kushal07]
- Explicit 3D models [Hoiem07, Yan07]
The Potemkin model sits between these approaches on the 2D-to-3D spectrum, trading off data efficiency and compatibility.
Two Uses of the Potemkin Model
1. Generate virtual training data for a multi-view object class detection system (2D test image to detection result).
2. Reconstruct the 3D shapes of detected objects for 3D understanding.
Outline
- Potemkin Model: Basic / Generalized / 3D
- Estimation: Class Skeleton / Real Training Data / Supervised Part Labeling
- Use: Virtual Training Data Generation
Definition of the Basic Potemkin Model
A basic Potemkin model for an object class with N parts consists of:
- K view bins (a discretization of viewing directions in 3D space)
- K projection matrices
- a class skeleton (S_1, S_2, ..., S_N): class-dependent
- NK^2 2D transformation matrices (one per part and ordered pair of view bins)
Estimating the Basic Potemkin Model, Phase 1
- Learn 2D projective transforms (T_1, T_2, T_3, ...) from a 3D oriented primitive rendered in each view bin.
- Each transform maps one view to another and has 8 degrees of freedom (a planar homography).
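Below is a minimal numpy sketch of how one of these 8-DOF projective transforms (a planar homography) could be estimated from corresponding points in two rendered views of a primitive. The function name and the plain DLT formulation are illustrative assumptions, not the implementation used in the thesis.

```python
import numpy as np

def estimate_view_transform(src_pts, dst_pts):
    """Estimate the 3x3 homography H (8 DOF) mapping src_pts onto dst_pts.

    src_pts, dst_pts: (N, 2) arrays of corresponding points (N >= 4),
    e.g. sampled on the primitive's silhouette in two view bins.
    Uses the direct linear transform (DLT) and an SVD null-space solve.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    A = np.array(rows)
    # The homography is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the overall scale (the 9th parameter)
```

In the basic model, one such transform would be stored for every part and every ordered pair of view bins, which gives the NK^2 matrices of the definition above.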
Estimating the Basic Potemkin Model, Phase 2
- We compute the 3D class skeleton for the target object class.
- We label the view bins and the parts of objects in real training images.
- Each part must be visible in at least two of the view bins we are interested in.
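A minimal sketch of that triangulation step is shown below, assuming known 3x4 projection matrices for the two view bins; the linear (DLT) formulation and the function name are illustrative, not the thesis implementation.

```python
import numpy as np

def triangulate_centroid(P1, P2, x1, x2):
    """Triangulate a part's 3D centroid from its 2D centroids in two views.

    P1, P2: (3, 4) projection matrices of the two view bins.
    x1, x2: (2,) image coordinates of the labeled part centroid in each view.
    Uses linear (DLT) triangulation and returns (X, Y, Z).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

The class skeleton (S_1, ..., S_N) can then be taken as the collection of such triangulated part centroids, for example averaged over the labeled training instances.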
Using the Basic Potemkin Model
[Figure: estimating and using the basic Potemkin model. Estimating: a synthetic, class-independent 3D model provides 2D synthetic views, shape primitives, and generic transforms; a few labeled real images of the target object class provide the class skeleton and part transforms. Using: all labeled images are transformed part by part and combined into view-specific virtual images.]
Problem of the Basic Potemkin Model
Outline
- Potemkin Model: Basic / Generalized / 3D
- Estimation: Class Skeleton / Multiple Primitives / Real Training Data / Supervised Part Labeling
- Use: Virtual Training Data Generation
Multiple Oriented Primitives
An oriented primitive is determined by its 3D shape and its starting view bin (azimuth and elevation). Each primitive is rendered in the K views (View 1, View 2, ..., View K) to obtain its own set of 2D transforms.
3D Shapes and 2D Transforms
For each 3D shape, a 2D transform T_{v, v'} maps the primitive's projection from view bin v to view bin v', over all K view bins.
[Figure: estimating and using the generalized Potemkin model. Estimating: a synthetic, class-independent 3D model provides 2D synthetic views, shape primitives (via primitive selection), and generic transforms; a few labeled real images of the target object class provide the class skeleton, part indicators, and part transforms. Using: all labeled images are transformed part by part and combined into view-specific virtual images.]
Greedy Primitive Selection
- Find a best set of primitives to model all parts.
- Greedy selection: at each step, add the primitive that most improves how well the chosen set models the parts (a schematic sketch is given below).
- Four primitives are enough to model four object classes (21 object parts).
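The following sketch shows the shape of such a greedy loop, assuming a scoring function prediction_error(primitive, part) that measures how poorly a primitive's transforms predict a part's appearance across views; the scoring function, data structures, and names are assumptions, not the thesis code.

```python
def greedy_primitive_selection(primitives, parts, prediction_error, num_primitives):
    """Greedily pick num_primitives primitives so every part is modeled well.

    prediction_error(primitive, part) -> float: how poorly that primitive
    predicts that part across views (lower is better). Each part is assigned
    to the best primitive chosen so far; each greedy step adds the primitive
    that most reduces the total error of that assignment.
    """
    chosen = []
    remaining = list(primitives)
    # Best error seen so far for each part (before any primitive: +inf).
    best = {part: float("inf") for part in parts}

    for _ in range(min(num_primitives, len(remaining))):
        def total_error_if_added(prim):
            return sum(min(best[part], prediction_error(prim, part)) for part in parts)

        candidate = min(remaining, key=total_error_if_added)
        chosen.append(candidate)
        remaining.remove(candidate)
        for part in parts:
            best[part] = min(best[part], prediction_error(candidate, part))
    return chosen
```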
Primitive-Based Representation
The Influence of Multiple Primitives
Multiple primitives predict better than a single primitive what objects look like in novel views.
[Figure: predicted appearances in novel views, single primitive vs. multiple primitives]
Virtual Training Images
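As a rough illustration of how a view-specific virtual training image could be assembled from a labeled real image, the sketch below warps each labeled part with its stored part transform and composites the results. It assumes per-part binary masks and a transform table keyed by (part, source view, target view); the names and the use of OpenCV's warpPerspective are illustrative, not the thesis implementation.

```python
import cv2
import numpy as np

def make_virtual_view(image, part_masks, transforms, src_view, dst_view):
    """Warp each labeled part from src_view into dst_view and composite.

    image:       (H, W, 3) real training image labeled with view bin src_view.
    part_masks:  dict part_name -> (H, W) boolean mask of that part.
    transforms:  dict (part_name, src_view, dst_view) -> 3x3 homography.
    Returns a view-specific virtual image (black where nothing maps).
    """
    h, w = image.shape[:2]
    virtual = np.zeros_like(image)
    for part, mask in part_masks.items():
        H = transforms[(part, src_view, dst_view)]
        # Warp the part's pixels and its mask with the same homography.
        part_pixels = image * mask[..., None]
        warped_pixels = cv2.warpPerspective(part_pixels, H, (w, h))
        warped_mask = cv2.warpPerspective(mask.astype(np.uint8), H, (w, h)) > 0
        virtual[warped_mask] = warped_pixels[warped_mask]
    return virtual
```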
Outline
- Potemkin Model: Basic / Generalized
- Estimation: Class Skeleton / Multiple Primitives / Real Training Data / Supervised Part Labeling / Self-Supervised Part Labeling
- Use: Virtual Training Data Generation
Self-Supervised Part Labeling
For the target view, choose one model object and label its parts manually. The model object is then deformed to the other objects in the target view to transfer the part labels.
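The deformation used in the thesis is richer than this, but as a simplified stand-in, the sketch below transfers per-pixel part labels from the labeled model object to another object in the same view bin by aligning their bounding boxes and resampling the label map; all names and the bounding-box alignment are assumptions.

```python
import numpy as np

def transfer_part_labels(model_labels, model_bbox, target_bbox, target_shape):
    """Transfer per-pixel part labels from a labeled model object to another
    object in the same view bin, using bounding-box-aligned resampling.

    model_labels: (H, W) integer label map of the model object (0 = background).
    model_bbox, target_bbox: (x0, y0, x1, y1) integer boxes around each object.
    target_shape: (H', W') shape of the target image.
    """
    mx0, my0, mx1, my1 = model_bbox
    tx0, ty0, tx1, ty1 = target_bbox
    out = np.zeros(target_shape, dtype=model_labels.dtype)

    ys, xs = np.mgrid[ty0:ty1, tx0:tx1]
    # Map each target pixel back into the model object's box and sample.
    u = mx0 + (xs - tx0) * (mx1 - mx0) / max(tx1 - tx0, 1)
    v = my0 + (ys - ty0) * (my1 - my0) / max(ty1 - ty0, 1)
    out[ys, xs] = model_labels[v.astype(int).clip(0, model_labels.shape[0] - 1),
                               u.astype(int).clip(0, model_labels.shape[1] - 1)]
    return out
```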
Multi-View Class Detection Experiment
- Detector: Crandall's system (CVPR05, CVPR07)
- Dataset: cars (partial PASCAL), chairs (collected by LIS)
- Training images per view (real/virtual): 20/100 (chairs), 15/50 (cars)
- Task: object vs. no object; no viewpoint identification
[Figure: ROC curves (true positive rate vs. false positive rate) for chairs and cars, comparing real images, real images from all views, real + virtual (single primitive), real + virtual (multiple primitives), and real + virtual (self-supervised).]
Outline
- Potemkin Model: Basic / Generalized / 3D
- Estimation: Class Skeleton / Multiple Primitives / Class Planes / Real Training Data / Supervised Part Labeling / Self-Supervised Part Labeling
- Use: Virtual Training Data Generation
Definition of the 3D Potemkin Model
A 3D Potemkin model for an object class with N parts consists of:
- K view bins (a discretization of viewing directions in 3D space)
- K projection matrices and K rotation matrices T_k in R^{3x3}
- a class skeleton (S_1, S_2, ..., S_N)
- K part-labeled images
- N 3D planes Q_i (i = 1, ..., N): a_i X + b_i Y + c_i Z + d_i = 0
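One possible way to organize these components in code is sketched below; the container and field names are illustrative only, not the thesis data structures.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Potemkin3DModel:
    """Container for the 3D Potemkin model of an N-part class over K view bins."""
    view_bins: list            # K view-bin descriptors (e.g. azimuth/elevation ranges)
    projections: list          # K projection matrices, each (3, 4)
    rotations: list            # K rotation matrices T_k, each (3, 3)
    skeleton: np.ndarray       # (N, 3) part centroids S_1..S_N
    part_labeled_images: list  # K part-labeled reference images
    planes: np.ndarray         # (N, 4) plane coefficients [a_i, b_i, c_i, d_i]
```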
3D Representation
- Efficiently captures prior knowledge of the 3D shapes of the target object class.
- The object class is represented as a collection of parts, which are oriented 3D primitive shapes.
- This representation is only approximately correct.
Estimating 3D Planes
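A minimal sketch of fitting one part plane a_i X + b_i Y + c_i Z + d_i = 0 to 3D points belonging to that part (e.g. points triangulated from the part-labeled views) is shown below; the least-squares formulation and the function name are assumptions.

```python
import numpy as np

def fit_part_plane(points_3d):
    """Fit a plane a*X + b*Y + c*Z + d = 0 to 3D points of one part.

    points_3d: (M, 3) array of 3D points on the part.
    Returns (a, b, c, d) with a unit normal (a, b, c).
    """
    pts = np.asarray(points_3d, dtype=float)
    centroid = pts.mean(axis=0)
    # The plane normal is the direction of least variance
    # (smallest right singular vector of the centered points).
    _, _, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]
    d = -normal.dot(centroid)
    return np.append(normal, d)
```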
[Figure: comparison of no occlusion handling, occlusion handling, and self-occlusion handling]
3D Potemkin Model: Car
- Minimum requirement: four views of one instance
- Number of parts: 8 (right side, grille, hood, windshield, roof, back windshield, back grille, left side)
Outline
- Potemkin Model: Basic / Generalized / 3D
- Estimation: Class Skeleton / Multiple Primitives / Class Planes / Real Training Data / Supervised Part Labeling / Self-Supervised Part Labeling
- Use: Virtual Training Data Generation / Single-View 3D Reconstruction
Single-View Reconstruction
Recover a 3D point (X, Y, Z) from a single 2D image point (x_im, y_im), given a camera matrix M and a 3D plane.
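This back-projection has a closed form: the two projection constraints and the plane equation are all linear in (X, Y, Z). A minimal sketch, assuming a 3x4 camera matrix M and a plane (a, b, c, d), with the function name being illustrative:

```python
import numpy as np

def backproject_to_plane(x_im, y_im, M, plane):
    """Reconstruct the 3D point (X, Y, Z) on a known plane that projects
    to the image point (x_im, y_im) under the 3x4 camera matrix M.

    plane: (a, b, c, d) with a*X + b*Y + c*Z + d = 0.
    """
    m1, m2, m3 = M  # rows of the camera matrix
    a, b, c, d = plane
    # Projection constraints (x_im*m3 - m1).P = 0 and (y_im*m3 - m2).P = 0
    # for P = [X, Y, Z, 1], plus the plane constraint, all linear in (X, Y, Z).
    A = np.array([
        (x_im * m3 - m1)[:3],
        (y_im * m3 - m2)[:3],
        [a, b, c],
    ])
    rhs = np.array([
        -(x_im * m3 - m1)[3],
        -(y_im * m3 - m2)[3],
        -d,
    ])
    return np.linalg.solve(A, rhs)
```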
Automatic 3D Reconstruction
Class-specific 3D reconstruction from a single 2D image, given a camera matrix M and a 3D ground plane (a_g X + b_g Y + c_g Z + d_g = 0).
[Figure: pipeline from the 2D input through detection (Leibe et al. 07), segmentation (Li et al. 05), geometric context (Hoiem et al. 05), self-supervised part registration against the 3D Potemkin model, and occluded part prediction, to the 3D output.]
Application: Photo Pop-up
Hoiem et al. classify image regions into three geometric classes (ground, vertical surfaces, and sky), treat detected objects as vertical planar surfaces in 3D, and use a default camera matrix and a default 3D ground plane.
Object Pop-up
Demo videos: http://people.csail.mit.edu/chiu/demos.htm
Depth Map Prediction
- Match a predicted depth map against available 2.5D data.
- Improve the performance of existing 2D detection systems.
Application: Object Detection
- 109 test images with stereo depth maps (Videre Designs), 127 annotated cars.
- 15 candidates per image; each candidate c_i has a bounding box b_i, a likelihood l_i from the 2D detector, and a predicted depth map z_i.
- Candidates are scored by combining the detector likelihood with a depth-consistency term between z_i and the measured depth, allowing a per-candidate scale and offset (a schematic rescoring sketch follows).
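The exact scoring used in the thesis is not reproduced here; the sketch below is a schematic rescoring that mixes the detector likelihood with a robust depth-consistency term after an offset correction, with all names and the particular consistency measure being assumptions.

```python
import numpy as np

def rescore_candidates(candidates, measured_depth, w=0.5):
    """Re-rank detection candidates using depth consistency.

    candidates: list of dicts with keys
        'bbox'  : (x0, y0, x1, y1) bounding box from the 2D detector,
        'score' : likelihood l_i from the 2D detector,
        'depth' : predicted depth map z_i cropped to the bounding box.
    measured_depth: (H, W) stereo depth map of the test image.
    The combined score mixes detector likelihood and depth consistency
    with weight w.
    """
    rescored = []
    for c in candidates:
        x0, y0, x1, y1 = c['bbox']
        z_meas = measured_depth[y0:y1, x0:x1]
        z_pred = c['depth']
        valid = np.isfinite(z_meas) & (z_meas > 0)
        if valid.any():
            # Allow a global offset between predicted and measured depth.
            offset = np.median(z_meas[valid] - z_pred[valid])
            residual = np.median(np.abs(z_meas[valid] - z_pred[valid] - offset))
            consistency = np.exp(-residual)
        else:
            consistency = 0.0
        rescored.append((w * c['score'] + (1 - w) * consistency, c))
    return sorted(rescored, key=lambda t: t[0], reverse=True)
```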
Experimental Results
- Number of car training/test images: 155/109
- Murphy-Torralba-Freeman detector (w = 0.5); Dalal-Triggs detector (w = 0.6)
[Figure: detection results for the Murphy-Torralba-Freeman detector and the Dalal-Triggs detector]
Quality of Reconstruction
Calibration: camera and 3D ground plane (1 m by 1.2 m table); 20 diecast model cars.

                Average overlap   Centroid error   Orientation error
Potemkin        77.5 %            8.75 mm          2.34°
Single plane    -                 73.95 mm         16.26°

Ferrari F1: 26.56 %, 24.89 mm, 3.37°
Application: Robot Manipulation
- 20 diecast model cars, 60 trials
- Successful grasps: 57/60 (Potemkin), 6/60 (single plane)
Demo videos: http://people.csail.mit.edu/chiu/demos.htm
Occluded Part Prediction
A basket instance.
Demo videos: http://people.csail.mit.edu/chiu/demos.htm
Contributions
The Potemkin model:
- Provides a middle ground between 2D and 3D representations
- Constructs a relatively weak 3D model
- Generates virtual training data
- Reconstructs 3D objects from a single image
Applications:
- Multi-view object class detection
- Object pop-up
- Object detection using 2.5D data
- Robot manipulation
Acknowledgements
Thesis committee members: Tomás Lozano-Pérez, Leslie Kaelbling, Bill Freeman
Experimental help:
- LabelMe and detection system: Sam Davies
- Robot system: Kaijen Hsiao and Huan Liu
- Data collection: Meg A. Lippow and Sarah Finney
- Stereo vision: Tom Yeh and Sybor Wang
- Others: David Huynh, Yushi Xu, and Hung-An Chang
All LIS people
My parents and my wife, Ju-Hui
Thank you!