Using Cloud Shadows to Infer Scene Structure and Camera Calibration

Presentation transcript:

Using Cloud Shadows to Infer Scene Structure and Camera Calibration Nathan Jacobs, Brian Bies, Robert Pless jacobsn@cse.wustl.edu http://www.cse.wustl.edu/~jacobsn/

time-lapse videos in the wild Today there are hundreds of thousands of outdoor cameras, sitting still and watching the world pass by. I am interested in geolocating and calibrating these cameras and in inferring properties of the scenes they view. webcam dataset: http://amos.cse.wustl.edu

Here is a typical time-lapse video of a partly cloudy day captured by a static camera. Today I will show you how to turn video like this into a depth map.

related work There is a lot of related work on inferring geometric properties of scenes from natural phenomena. Here I will highlight a few of the types of cues others have focused on. First is work that attempts to estimate scene geometry by tracking shadows cast by stationary objects (Bouguet and Perona, ICCV 98; Caspi and Werman, CVPR 06; Kawasaki and Furukawa, IJCV 08). Second is work that uses photometric cues to estimate surface normals (Sunkavalli, Romeiro, Matusik, Zickler and Pfister, CVPR 08; Koppal and Narasimhan, PAMI 08; Shen and Tan, CVPR 09). Third is work that reasons about the relationship of haze and fog to depth (Schechner, Narasimhan and Nayar, CVPR 01; Narasimhan and Nayar, CVPR 03; He, Sun and Tang, CVPR 09). And finally is work that looks at how stochastically structured light patterns can be used to recover scene structure (Zhang, Curless and Seitz, CVPR 03; Swirski, Schechner, Herzberg and Negahdaripour, ICCV 09).

outline: from clouds to depth maps
- spatial cue: nearby points see similar clouds; depth estimation using a gradient descent optimization algorithm
- temporal delay cue: wind pushes clouds across the scene; depth estimation using linear constraints followed by a search

spatial cue First Law of Geography: "Everything is related to everything else, but near things are more related than distant things." - Waldo Tobler. The spatial cue is an instance of this first law of geography.

spatial cue The reason this applies to our setting is that the closer two points are in the world, the more likely they are to be under the shadow of the same cloud.

temporal correlation is related to distance We can see this relationship in the following false-color image. This image was constructed by… These correlation maps are the input to our algorithm. Our algorithm works by explicitly modeling this relationship.
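As a rough illustration (a minimal numpy sketch, not the authors' code) of how such a correlation map could be computed: correlate one reference pixel's intensity time series against every other pixel's series.

```python
import numpy as np

def correlation_map(frames, ref_row, ref_col):
    """frames: (T, H, W) grayscale time-lapse stack."""
    T, H, W = frames.shape
    series = frames.reshape(T, -1).astype(np.float64)
    series -= series.mean(axis=0)                      # zero-mean each pixel's series
    series /= np.linalg.norm(series, axis=0) + 1e-12   # unit length per pixel
    ref = series[:, ref_row * W + ref_col]
    corr = series.T @ ref                              # normalized correlation with the reference
    return corr.reshape(H, W)
```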

What is the relationship between correlation and distance? The big question is "…" The answer is… it depends. [Figure: pixel intensities over time.]

algorithm overview: Nonmetric Multidimensional Scaling (NMDS) with projective constraints.
- compute the correlation between pairs of pixels
- estimate the focal length and create an initial planar depth map
- iterate until convergence:
  - use the current depth map to compute the distance between points
  - estimate the correlation-to-distance mapping
  - update the depth map to minimize the error in distances
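A schematic sketch of this loop (the helpers `fit_monotonic_mapping` and `update_depths` are hypothetical stand-ins; the actual NMDS update with projective constraints is more involved than a plain gradient step):

```python
import numpy as np

def estimate_depths(corr, rays, depth0, n_iters=50):
    """corr: (N, N) pairwise pixel correlations;
    rays: (N, 3) unit viewing rays from the estimated pinhole model;
    depth0: (N,) initial planar depth map."""
    depth = depth0.copy()
    for _ in range(n_iters):
        pts = depth[:, None] * rays                    # 3D points along the pixel rays
        dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
        f = fit_monotonic_mapping(corr, dist)          # correlation -> distance (next slide)
        target = f(corr)                               # distances implied by the correlations
        depth = update_depths(depth, rays, target)     # step that reduces the distance error
    return depth
```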

detail in buildings (no post-processing)

estimating the correlation-to-distance mapping: the expected value of distance given correlation. We estimate the mapping using monotonic regression: it is non-parametric, constrained to be monotonically decreasing, and minimizes the L1-norm of the fitting error, which admits a linear programming solution.
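One way to pose that step, assuming scipy is available (a sketch, not the authors' solver): fit the mapping as L1 isotonic regression via a linear program, with the fitted values constrained to decrease as correlation increases.

```python
import numpy as np
from scipy.optimize import linprog

def fit_decreasing_l1(corr, dist):
    """Fit f(correlation) -> distance, monotonically decreasing, minimizing L1 error."""
    corr, dist = np.asarray(corr, float), np.asarray(dist, float)
    order = np.argsort(corr)
    d = dist[order]
    n = len(d)
    # Variables: [f_1..f_n, t_1..t_n]; minimize sum(t) with t_i >= |f_i - d_i|.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    I = np.eye(n)
    A_abs = np.block([[I, -I], [-I, -I]])        # f - t <= d  and  -f - t <= -d
    b_abs = np.concatenate([d, -d])
    # Decreasing in correlation: f_{i+1} - f_i <= 0 for consecutive sorted samples.
    D = np.zeros((n - 1, 2 * n))
    D[np.arange(n - 1), np.arange(1, n)] = 1.0
    D[np.arange(n - 1), np.arange(n - 1)] = -1.0
    res = linprog(c,
                  A_ub=np.vstack([A_abs, D]),
                  b_ub=np.concatenate([b_abs, np.zeros(n - 1)]),
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return corr[order], res.x[:n]                # fitted distances at the sorted correlations
```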

improving an existing depth map [Diagram: the current depths, the correlation-to-distance mapping, the distances implied by the depth map, and the per-pair weights.]

improving an existing depth map [Diagram: pixel rays and the camera location.]

recap of the spatial cue Assumption: a monotonically decreasing relationship between correlation and distance. Algorithm: compute the temporal similarity between pairs of pixels, then use modified NMDS to estimate a depth map.

temporal delay cue [Diagram: scene points x, y, and z.] Now I will describe the temporal delay cue. Here's how it works: y sees roughly the same pattern of clouds as x, but with a temporal delay; z has the same temporal delay as y but sees a slightly different part of the clouds. This is the temporal delay cue.

linear constraints on location (unknown: the scene points x and y; given: the wind velocity W; estimated from the images: the temporal delay between them). Because the cloud pattern at y is the pattern at x delayed in time, the two points are linked by a linear constraint of the form y = x + delay · W.

estimating delay: find the temporal delay that maximizes correlation. [Figure: wind direction; hue encodes the estimated temporal delay, brightness the confidence in the estimate.]
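A minimal sketch of that delay estimate (a plain integer-shift search with numpy; the peak correlation value can serve as the confidence that is visualized as brightness):

```python
import numpy as np

def best_delay(a, b, max_delay=30):
    """Return (delay, peak correlation) of series b relative to series a."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    delays = np.arange(-max_delay, max_delay + 1)
    scores = []
    for d in delays:
        if d >= 0:
            x, y = a[d:], b[:len(b) - d]         # b lags a by d frames
        else:
            x, y = a[:len(a) + d], b[-d:]        # a lags b by -d frames
        scores.append(np.mean(x * y))            # normalized cross-correlation at this shift
    k = int(np.argmax(scores))
    return delays[k], scores[k]
```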

linear constraints on depth (unknown: the depths of x, y, and z along their pixel rays; given: the wind velocity W; estimated from the images: the temporal delays). Writing each scene point as its depth times its viewing ray and stacking the pairwise delay constraints yields a rank-deficient set of linear constraints on the depths.

from constraints to a depth map Because the constraint set is rank deficient, the remaining uncertainty lies along its null space, so recovering the depth map reduces to a one-dimensional search (along the null space): a much simpler optimization.
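A sketch of what that search might look like (the `score` function is a hypothetical stand-in for whatever criterion ranks the candidate depth maps):

```python
import numpy as np

def depth_from_constraints(A, b, alphas, score):
    """Solve the rank-deficient system A z = b for per-pixel depths z,
    then search the one-dimensional solution family z0 + alpha * n."""
    z0, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm particular solution
    _, _, Vt = np.linalg.svd(A)
    n = Vt[-1]                                   # null-space direction (smallest singular value)
    return min((z0 + alpha * n for alpha in alphas), key=score)
```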

another depth map from the delay cue I just want to take a moment to emphasize what I think is one of the most exciting aspects of the temporal delay cue. Because the delay constraints are based on wind velocity, the depths we estimate have units of meters. Estimating metric depth from a single camera view is a notoriously challenging task, and we have introduced a new cue that makes it possible.

summary: depth from clouds We introduced two new cues for depth estimation. The spatial cue works with very low frame rates and is solved with NMDS plus projective constraints. The temporal delay cue requires a higher frame rate, leads to a simpler optimization, and offers the possibility of metric depth.

Questions? acknowledgements funding: NSF IIS-0546383; time-lapse sequences: Martin Setvak. Nathan Jacobs http://www.cse.wustl.edu/~jacobsn/

finding an initial depth map [Figure: correlation versus pairwise distance for candidate initial depth maps; the candidate with the lowest error is chosen.]

the ambiguity in the depth map

null space search [Figure: candidate depth maps along the null space of the constraints.]