Visual 3D Modeling using Cameras and Camera Networks

Presentation transcript:

Visual 3D Modeling using Cameras and Camera Networks
Marc Pollefeys, University of North Carolina at Chapel Hill

Talk outline
- Introduction
- Visual 3D modeling with a hand-held camera: acquisition of camera motion, acquisition of scene structure, constructing visual models
- Camera networks: camera network calibration, camera network synchronization, towards active camera networks...
- Conclusion

What can be achieved? Can we get 3D models from images? How much do we need to know about the camera? Can we move around freely? Hand-held? Do we need to keep parameters fixed? Zoom? What about auto-exposure? And for camera networks: can we provide more flexible systems? Avoid calibration? What about using IP-based PTZ cameras? Hand-held camcorders? Unsynchronized or even asynchronous streams?


(Pollefeys et al. '98)

Improvements for video (Pollefeys et al. '04): key-frame selection, more efficient RANSAC, fully projective pipeline, improved self-calibration, dealing with dominant planes, bundle adjustment, polar stereo rectification, dealing with radial distortion, faster stereo algorithm, dealing with specularities, volumetric 3D integration, dealing with auto-exposure, image-based rendering.

Feature tracking/matching. Shape-from-photographs: match Harris corners. Shape-from-video: track KLT features. Problem: insufficient motion between consecutive video frames to compute the epipolar geometry accurately and use it effectively as an outlier filter.

Key-frame selection. Select a key-frame when F yields a better model than H, using the robust Geometric Robust Information Criterion (GRIC), which balances a bad-fit penalty against model complexity (Torr '98). Given view i as a key-frame, pick as the next key-frame the first view j for which the epipolar model F outperforms the homography model H under GRIC, or a few views later (Pollefeys et al. '02).
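A minimal sketch of the criterion, assuming Torr's usual form of GRIC (lower score = better model) with data dimension r = 4 and the standard weights λ1 = log r, λ2 = log(rn), λ3 = 2; the function name and defaults are illustrative:

```python
import math

def gric(residuals, sigma, d, k, r=4):
    """Torr's Geometric Robust Information Criterion (lower is better).

    residuals: per-match geometric errors; sigma: assumed noise level;
    d: model dimension (H: 2, F: 3); k: number of model parameters
    (H: 8, F: 7); r: dimension of the data (4 for point pairs).
    """
    n = len(residuals)
    lam1, lam2, lam3 = math.log(r), math.log(r * n), 2.0
    # robustly capped bad-fit penalty plus model-complexity terms
    rho = sum(min(e * e / sigma ** 2, lam3 * (r - d)) for e in residuals)
    return rho + lam1 * d * n + lam2 * k
```

With little camera motion the matches fit H essentially perfectly, so gric(..., d=2, k=8) stays below gric(..., d=3, k=7) and no key-frame is taken; once the homography's residuals grow, F takes over and the frame is selected as a key-frame.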

Epipolar geometry: the underlying structure in a set of matches for a rigid scene. The fundamental matrix F (a 3x3, rank-2 matrix) is computable from corresponding points, simplifies matching, allows wrong matches to be detected, and is related to calibration. [Figure: camera centers C1 and C2 observing a point P, with epipolar lines l1, l2 and epipoles e1, e2.]

Epipolar geometry computation: robust estimation (RANSAC).
Step 1. Extract features.
Step 2. Compute a set of potential matches.
Step 3. Repeat:
Step 3.1 Select a minimal sample (i.e. 7 matches) (generate hypothesis).
Step 3.2 Compute solution(s) for F.
Step 3.3 Count inliers; stop early if not promising (verify hypothesis).
until the confidence of having drawn an outlier-free sample, given #inliers and #samples, exceeds 95%.
Step 4. Compute F based on all inliers.
Step 5. Look for additional matches.
Step 6. Refine F based on all correct matches.

Samples needed for 95% confidence, as a function of the inlier ratio:
inlier ratio: 90% 80% 70% 60% 50%
#samples:      5   13  35  106 382
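The #samples row follows from the standard RANSAC sampling bound; a small check, assuming 7-point minimal samples and 95% confidence (the function name is illustrative):

```python
import math

def ransac_samples(inlier_ratio, sample_size=7, confidence=0.95):
    """Number of random draws so that, with the given confidence,
    at least one minimal sample (7 matches for F) is outlier-free."""
    p_good = inlier_ratio ** sample_size   # P(one sample is all-inlier)
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_good))
```

This reproduces the table above: 5 samples suffice at 90% inliers, while at 50% inliers 382 samples are needed, which is why the early abort in step 3.3 matters.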

Epipolar geometry computation: the geometric relations between two views are fully described by the recovered 3x3 matrix F (corresponding points satisfy x'^T F x = 0).

Sequential structure and motion computation. Initialize motion (P1, P2 compatible with F). Initialize structure (minimize reprojection error). Extend motion (compute the pose from matches seen in 2 or more previous views). Extend structure (initialize new structure, refine existing structure).
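Initializing structure by minimizing reprojection error is a non-linear problem; a common starting point is midpoint triangulation of the two viewing rays. A minimal pure-Python sketch (camera centers and ray directions as inputs are illustrative; the actual pipeline refines by reprojection error afterwards):

```python
def midpoint_triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: the 3D point closest to the two viewing
    rays x = c1 + s*d1 and x = c2 + t*d2 (camera centers c1, c2 and
    ray directions d1, d2). Solves the 2x2 normal equations in closed
    form; det ~ 0 signals (near-)parallel rays."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = [p - q for p, q in zip(c2, c1)]
    a11, a12, a22 = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    b1, b2 = dot(d1, r), dot(d2, r)
    det = a11 * a22 - a12 * a12
    s = (a22 * b1 - a12 * b2) / det
    t = (a12 * b1 - a11 * b2) / det
    p1 = [c + s * d for c, d in zip(c1, d1)]
    p2 = [c + t * d for c, d in zip(c2, d2)]
    return [(u + v) / 2.0 for u, v in zip(p1, p2)]
```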

Dealing with dominant planar scenes (Pollefeys et al., ECCV '02). Uncalibrated structure and motion (USaM) fails when the common features all lie in a plane. Solution, part 1: model selection to detect the problem.

Dealing with dominant planar scenes (Pollefeys et al., ECCV '02). Solution, part 2: delay the ambiguous computations until after self-calibration (couple self-calibration over all 3D parts).

Refine structure and motion: use projective bundle adjustment. The sparse bundle structure allows very efficient computation (2 levels); also take radial distortion into account (1 or 2 parameters).

Self-calibration using the absolute conic (Faugeras ECCV '92; Triggs CVPR '97; Pollefeys et al. ICCV '98; etc.). The Euclidean projection matrix yields constraints on the intrinsics K, e.g. constant parameters, no skew, etc. Projection of the absolute quadric gives the dual image of the absolute conic in each view; translating the constraints on K through the projection equation yields constraints on the absolute dual quadric. Upgrade from projective to metric: transform structure and motion so that the absolute dual quadric becomes diag(1,1,1,0).
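In symbols (writing ω* for the dual image of the absolute conic and Ω* for the absolute dual quadric, as in the cited papers), the projection constraint reads:

```latex
\omega^*_i \;\simeq\; P_i \,\Omega^*\, P_i^{\top}, \qquad \omega^*_i = K_i K_i^{\top}
```

Constraints on K_i (zero skew, constant parameters, known principal point, ...) thereby become constraints on the ten entries of the symmetric 4x4 matrix Ω*; once Ω* is recovered, applying a transformation T with T Ω* T^T = diag(1,1,1,0) upgrades structure and motion from projective to metric.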

Practical linear self-calibration (Pollefeys et al., ECCV '02). Don't treat all constraints equally after normalization: some are relatively accurate for most cameras, others are only a rough approximation but still useful to avoid degenerate configurations. When fixating a point at the image center, not only the absolute quadric diag(1,1,1,0) satisfies the ICCV '98 equations, but also diag(1,1,1,a), i.e. real or imaginary spheres!

Refine metric structure and motion: use metric bundle adjustment with a Euclidean parameterization of the projection matrices. The same sparseness advantages apply, and radial distortion is again included.

Mixing real and virtual elements in video: virtual reconstruction of an ancient fountain; preview fragment of the Sagalassos TV documentary. Similar to 2d3's Boujou and RealViz's MatchMover.

Intermezzo: auto-calibration of a multi-projector system (Raij and Pollefeys, submitted). Hard because the screens are planar, but still possible.


Stereo rectification: resample the images to simplify the matching process.

Stereo rectification: resample the images to simplify the matching process, also taking radial distortion into account!

Polar stereo rectification (Pollefeys et al. ICCV '99): a polar reparameterization of the images around the epipoles. It handles configurations that do not work with standard homography-based approaches.

General iso-disparity surfaces (Pollefeys and Sinha, ECCV '04). Example: polar rectification preserves disparities. Application: active vision; there is also an interesting relation to the human horopter.

Stereo matching. Constraints: epipolar geometry, ordering, uniqueness, disparity limit, disparity gradient limit. Similarity measure: SSD or NCC. Trade-off between matching cost and discontinuities, resolved by an optimal path search (dynamic programming). (Cox et al. CVGIP '96; Koch '96; Falkenhagen '97; Van Meerbergen, Vergauwen, Pollefeys, Van Gool IJCV '02)
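To make the matching-cost side concrete, here is a deliberately simplified winner-take-all SSD matcher for one rectified scanline pair. It is illustrative only: the matcher described above instead runs a dynamic-programming optimal-path search over these same costs, enforcing the ordering and uniqueness constraints.

```python
def scanline_ssd_disparity(left, right, max_disp, half_win=1):
    """Winner-take-all SSD matching along one rectified scanline pair.

    left, right: lists of intensities on corresponding epipolar lines
    (rectification turns epipolar lines into scanlines). Returns one
    disparity per pixel of the left line."""
    n = len(left)
    disparities = [0] * n
    for x in range(n):
        best_cost, best_d = float("inf"), 0
        for d in range(max_disp + 1):
            if x - d < 0:
                break
            # sum of squared differences over a small window
            cost = 0
            for dx in range(-half_win, half_win + 1):
                xl, xr = x + dx, x - d + dx
                if 0 <= xl < n and 0 <= xr < n:
                    cost += (left[xl] - right[xr]) ** 2
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities[x] = best_d
    return disparities
```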

Hierarchical stereo matching: downsampling (Gaussian pyramid) and disparity propagation allow faster computation and deal with large disparity ranges.

Disparity map: D(x,y) maps image I(x,y) onto image I'(x',y') via (x', y') = (x + D(x,y), y).
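The relation (x', y') = (x + D(x,y), y) can be used directly to forward-warp one image into the other; a minimal sketch (nested lists stand in for images, and occlusions are resolved naively by overwrite order rather than by depth):

```python
def warp_with_disparity(image, disp):
    """Forward-map pixels of I into I' using (x', y') = (x + D(x,y), y).

    image, disp: row-major lists of rows with identical shape. Pixels
    mapping outside the target, or never written, stay None (holes)."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xp = x + disp[y][x]
            if 0 <= xp < w:
                out[y][xp] = image[y][x]
    return out
```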

Example: reconstruct an image from its neighbors.

Multi-view depth fusion (Koch, Pollefeys and Van Gool, ECCV '98): compute depth for every pixel of a reference image by triangulation, using multiple views (an up- and down-sequence) and a Kalman filter. Also allows robust texture computation.

Real-time stereo on the GPU (Yang and Pollefeys, CVPR 2003): plane-sweep stereo that computes sum-of-squared-differences in pixel shaders, with hardware mip-map generation for aggregation over a window (trading off small vs. large support windows). 150M disparity hypotheses/sec on a Radeon 9700 Pro, e.g. 512x512 with 20 disparities at 30 Hz (demo on a GeForce4). The GPU is great for vision too!

Dealing with specular highlights (Yang, Pollefeys and Welch, ICCV '03): extend the photo-consistency model to include highlights.


3D surface model: a depth image is converted into a triangle mesh; together with the texture image this yields a textured 3D wireframe model.

Volumetric 3D integration (Curless and Levoy, SIGGRAPH '96): multiple depth images are fused by volumetric integration; texture integration produces a patchwork texture map.

Dealing with auto-exposure (Kim and Pollefeys, submitted): estimate the camera's radiometric response curve, exposure, and white-balance changes. This extends prior HDR work at Columbia, CMU, etc. to a moving camera; the brightness transfer curve is robustly estimated using dynamic programming and fit with a response-curve model.

Dealing with auto-exposure (Kim and Pollefeys, submitted). Applications: photometric alignment of textures (or HDR textures); HDR video.

Part of a Jain temple, recorded during the post-ICCV tourist trip in India (Nikon F50; scanned).

Example: from DV video to a 3D model. Accuracy of ~1/500 from DV video (i.e. 140 kB JPEGs, 576x720).

Unstructured lightfield rendering (Heigl et al. '99): demo.


Shape-from-silhouette / visual-hull systems have recently been very popular: CMU's Dome, 3D Room, etc., MIT's Visual Hull, Maryland's Keck lab, ETHZ's BLUE-C, and more.

Offline calibration procedures rely on special calibration data (a planar pattern, a moving LED) and require physical access to the environment. For active camera networks, how do we maintain calibration?

An example: P. Sand, L. McMillan, and J. Popovic, "Continuous Capture of Skin Deformation", ACM Transactions on Graphics 22(3), 578-586, 2003. Four NTSC videos recorded by 4 computers for 4 minutes, manually synchronized and calibrated using a MoCap system.

Can we do without explicit calibration? Feature-based? It is hard to match features between very different views, there are not many features on the foreground, and the backgrounds often don't overlap much between views. Silhouette-based? Silhouettes are needed for the visual hull anyway, but the approach is not obvious.

Multiple view geometry of silhouettes: points on silhouettes in two views do not correspond in general, except for projected frontier points, which lie on the epipolar tangents. There are always at least two extremal frontier points per silhouette; in general the correspondence holds only over two views. [Figure: frontier points x1, x2 and x'1, x'2 on the epipolar tangents.]

Calibration from silhouettes: prior work. Epipolar geometry from silhouettes: Porrill and Pollard '91; Astrom, Cipolla and Giblin '96. Structure-and-motion from silhouettes: Joshi, Ahuja and Ponce '95 (trinocular rig / rigid object); Vijayakumar, Kriegman and Ponce '96 (orthographic); Wong and Cipolla '01 (circular motion, at least to start); Yezzi and Soatto '03 (refinement only). None of these is really applicable to calibrating a visual-hull system.

Camera network calibration from silhouettes (Sinha, Pollefeys and McMillan, submitted). Seven or more corresponding frontier points are needed to compute the epipolar geometry for general motion; they are hard to find on a single silhouette and possibly occluded. However, visual-hull systems record many silhouettes!

Camera network calibration from silhouettes: if we know the epipoles, it is simple. Draw the 3 outer epipolar tangents (from two silhouettes) and compute the corresponding line homography H^-T (not unique); the epipolar geometry is then F = [e]x H.
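The final step F = [e]x H is just a matrix product with the skew-symmetric cross-product matrix of the epipole. A minimal sketch with plain-Python 3x3 matrices (H here stands for whatever compatible homography the tangent construction yields):

```python
def cross_matrix(e):
    """Skew-symmetric [e]x such that [e]x v = e x v for 3-vectors."""
    return [[0.0, -e[2], e[1]],
            [e[2], 0.0, -e[0]],
            [-e[1], e[0], 0.0]]

def fundamental_from_epipole(e, H):
    """F = [e]x H for an epipole e and a compatible homography H.
    Any F built this way has rank 2 and satisfies e^T F = 0."""
    S = cross_matrix(e)
    return [[sum(S[i][k] * H[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```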

Let's just sample (RANSAC). Repeat: generate a random hypothesis for the epipoles; compute the epipolar geometry; verify the hypothesis and count inliers (use a conservative threshold, e.g. 5 pixels, but abort early if not promising); until a satisfying hypothesis is found. Then refine the hypothesis: minimize the symmetric transfer error of the frontier points and include more inliers (use a strict threshold, e.g. 1 pixel), until the error and inlier set are stable. We will need an efficient representation, as we are likely to need many trials!

A compact representation for silhouettes: tangent envelopes. Store the convex hull of each silhouette and the tangency points for a discrete set of angles, approx. 500 bytes per frame, so a whole video sequence easily fits in memory and tangency computations are efficient.

Epipole hypothesis and computing H.

Model verification.

Remarks: RANSAC allows efficient exploration of the 4D parameter space (the epipole pair) while being robust to imperfect silhouettes. Select key-frames to avoid accumulating too many identical constraints (when the silhouette is static).

Reprojection error and epipole hypothesis distribution: the 40 best hypotheses out of 30,000. [Plot: hypotheses along the y-axis, sorted residuals along the x-axis, pixel error along the z-axis.] Typically 1 in 5000 samples converges to the global minimum after non-linear refinement, corresponding to about 15 s of computation time.

Computed fundamental matrices.

F computed directly (black epipolar lines) vs. F after a consistent 3D reconstruction (color).


From epipolar geometry to full calibration: not trivial, because matches exist only between pairs of views. The approach is similar to Levi et al. CVPR '03, but practical. The key step is to solve for a camera triplet, where the third camera is linear in a 4-vector v and P3 is chosen as the closest solution; then assemble the complete camera network and run a projective bundle, self-calibration, and a metric bundle.

Experiment: 4 video sequences at 30 fps; all F matrices computed from silhouettes; full calibration.

Metric cameras and visual-hull reconstruction from 4 views: the final calibration quality is comparable to an explicit calibration procedure.

What if the videos are unsynchronized? For videos recorded at a constant frame rate, the same constraints remain valid, up to extra unknown temporal offsets.

Synchronization and calibration from silhouettes (Sinha and Pollefeys, submitted): add a random temporal offset to the RANSAC hypothesis generation and draw more samples. Use a multi-resolution approach: key-frames with slow motion give a rough synchronization; adding key-frames with faster motion refines it.

Synchronization experiment: total temporal offset search range [-500, +500] frames (i.e. ±15 s), unique peaks at the correct offsets, and the possibility of sub-frame synchronization.

Synchronize the camera network: consider an oriented graph with the pairwise offsets as edge values; for consistency, the offsets along every loop should add up to zero. Obtain the MLE by least-squares minimization, in frames (= 1/30 s). [Figure: example graph with measured edge offsets +3, -5, +8, +6, +2 and the ground truth.]
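The loop-consistency fit amounts to a small linear least-squares problem over per-camera offsets, with one camera fixed as the time reference. A sketch with an illustrative solver and made-up offsets (the real system works on the measured pairwise values):

```python
def synchronize(num_cams, edges):
    """Least-squares per-camera temporal offsets t (t[0] fixed at 0)
    from pairwise measurements: edges = [(i, j, o_ij)] with
    o_ij ~ t[j] - t[i]. Inconsistent loop sums are distributed by the
    joint fit; solved via the normal equations + Gaussian elimination."""
    n = num_cams - 1                      # unknowns t[1..num_cams-1]
    A = [[0.0] * n for _ in range(n)]
    b = [0.0] * n
    for i, j, o in edges:
        # accumulate J^T J and J^T o for the residual t[j] - t[i] - o
        for a, sa in ((j, 1.0), (i, -1.0)):
            if a == 0:
                continue
            b[a - 1] += sa * o
            for c, sc in ((j, 1.0), (i, -1.0)):
                if c != 0:
                    A[a - 1][c - 1] += sa * sc
    # Gaussian elimination with partial pivoting
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    t = [0.0] * num_cams
    for r in range(n - 1, -1, -1):
        rhs = b[r] - sum(A[r][c] * t[c + 1] for c in range(r + 1, n))
        t[r + 1] = rhs / A[r][r]
    return t
```

When a loop sums to zero the measurements are reproduced exactly; when it does not, the residual is spread evenly over the loop's edges, which is exactly the MLE under equal Gaussian noise on each measured offset.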

Towards active camera networks: pan-tilt-zoom and networked cameras provide much more flexibility (up to 3 Gpixels!), but obtaining and maintaining calibration is a challenge.

Calibration of PTZ cameras: similar to Collins and Tsin '99, but with varying radial distortion.


Conclusion. 3D models from video: more flexibility, more generality. Camera networks: synchronization and calibration just from silhouettes, great for visual-hull systems. Future plans: handle sub-frame offsets for visual-hull reconstruction; extend to active camera networks (PTZ cameras); extend to asynchronous video streams (IP cameras).

Acknowledgments: NSF CAREER, NSF ITR on 3D-TV, DARPA seedling, Link Foundation; EU ACTS VANGUARD, ITEA BEYOND, EU IST MURALE, FWO-Vlaanderen. Thanks to Sudipta Sinha, Ruigang Yang, Seon Joo Kim, Andrew Raij, Greg Welch, Leonard McMillan (UNC); Maarten Vergauwen, Frank Verbiest, Kurt Cornelis, Jan Tops, Luc Van Gool (KU Leuven); Reinhard Koch (U. Kiel); and Benno Heigl.