Determining the location and orientation of webcams using natural scene variations. Nathan Jacobs.


Let's use webcams for science. But first, let's learn some things about the cameras.

Where is the webcam? What direction is it pointing? Given only a webcam's URL.

Where is this webcam? What direction is it pointing?

Where are these webcams?

Our idea: use many images.

Talk overview:
- our test dataset of webcam images
- examples of natural scene variations
- a method for determining location
- a method for determining orientation

Our test dataset: the Archive of Many Outdoor Scenes (AMOS). 1,000 webcams over 3 years gives 39 million images, with many examples of how the appearance of the world changes over time.


A year of images from one webcam.

Examples of natural variations: daily variations (sunrise, noon, sunset).

Day-to-day variations.

Seasonal variations.

The webcam geo-localization problem. Given: a sequence of time-stamped images. Output: the geographic location of the camera.

Existing localization methods:
- static image features
- tracking shadows cast on the ground
- a computer-vision sextant
- network address lookup

Our approach:
- use many images
- extract time-series signals that correspond to the natural scene variations
- use the fact that natural scene variations depend on location

Use PCA to convert images to low-dimensional time-series: image at time t = mean image + f1(t) * component 1 + f2(t) * component 2 + ... The coefficient time-series f1(t), f2(t), ... encode the difference from the mean image at time t.
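This decomposition can be sketched as a PCA over the stacked frames (a minimal NumPy illustration; image loading, color handling, and the choice of k are omitted, and the function name is mine, not from the talk):

```python
import numpy as np

def image_timeseries_pca(frames, k=2):
    """Decompose an image stack into a mean image, k principal
    components, and k coefficient time-series f_i(t).

    frames: array of shape (T, H, W), one grayscale image per timestamp.
    Returns (mean_img, components, coeffs) with shapes
    (H, W), (k, H, W), and (T, k).
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centered data: rows of Vt are the principal components.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]
    coeffs = Xc @ components.T  # f_i(t) for each image
    return mean.reshape(H, W), components.reshape(k, H, W), coeffs

# A frame at time t is then approximately:
#   mean + coeffs[t, 0] * components[0] + coeffs[t, 1] * components[1] + ...
```

With k large enough the reconstruction is exact; the talk's point is that the first few coefficient time-series already capture the dominant natural variations.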

The same decomposition, image at time t = mean image + f1(t) * component 1 + f2(t) * component 2 + ..., is computed independently for each camera (Camera 1 through Camera 4), yielding a mean image, components, and coefficient time-series per camera.

Our geo-location algorithm (ICCV 2007):
1. Compute PCA coefficients from a subset of images from the camera (about one month).
2. Assemble a geo-registered satellite map for each timestamp at which we have an image.
3. Reconstruct the time-series of each satellite pixel linearly from the time-series of the leading PCA coefficients.
4. Choose the best: the map pixel with the lowest reconstruction error is the estimated location of the camera.
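Steps 3 and 4 amount to a per-pixel least-squares fit. A hedged sketch, assuming the satellite maps have already been sampled at the image timestamps (the function name and array layout are illustrative assumptions):

```python
import numpy as np

def localize(coeffs, satellite_series):
    """Estimate the camera's map location by reconstruction error.

    coeffs: (T, k) leading PCA coefficient time-series from the webcam.
    satellite_series: (T, H, W) geo-registered satellite pixel values,
        one map per timestamp.
    Returns (row, col) of the map pixel whose time-series is best
    reconstructed as a linear function of the coefficients.
    """
    T, H, W = satellite_series.shape
    Y = satellite_series.reshape(T, H * W).astype(float)
    # Design matrix: the coefficient time-series plus a constant term.
    A = np.column_stack([coeffs, np.ones(T)])
    # Least-squares fit of every map pixel's time-series at once.
    B, *_ = np.linalg.lstsq(A, Y, rcond=None)
    resid = ((Y - A @ B) ** 2).sum(axis=0)  # per-pixel reconstruction error
    return np.unravel_index(np.argmin(resid), (H, W))
```

A pixel near the camera's true location shares the camera's weather and illumination signal, so its time-series is well explained by the coefficients and its residual is small.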

Choosing the webcam images and the satellite maps: with PCA on all images, the first coefficients depend on sun position; with PCA on many days of images taken at noon, the first coefficients depend on weather conditions.

Localization using sunlight images.

Localization using satellite imagery.

The camera orientation problem. Given: a sequence of time-stamped images. Output: the geographic orientation of the camera.

Geo-orientation algorithm overview (WACV 2008). Assume that the camera location is known.
1. Find pixels that image sky.
2. Create synthetic hemispherical sky-appearance images.
3. Match the sky pixels to the synthetic sky-appearance model.

Step 1: find sky pixels. Algorithm:
1. Solve for component images using PCA.
2. Threshold each pixel on the value of component 1.
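A minimal sketch of this thresholding idea: in outdoor sequences the leading component is dominated by the day/night illumination cycle, in which sky pixels carry large weight. The relative threshold of 0.5 is an illustrative assumption, not a value from the talk:

```python
import numpy as np

def find_sky_pixels(frames, rel_threshold=0.5):
    """Mark likely sky pixels by thresholding the first PCA component.

    frames: (T, H, W) image stack.
    rel_threshold: cutoff as a fraction of the component's maximum
        per-pixel weight (assumed value; would be tuned in practice).
    Returns a boolean (H, W) sky mask.
    """
    T, H, W = frames.shape
    X = frames.reshape(T, H * W).astype(float)
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    comp1 = Vt[0].reshape(H, W)
    # The SVD sign is arbitrary; orient the component to be mostly positive.
    if comp1.sum() < 0:
        comp1 = -comp1
    return comp1 > rel_threshold * comp1.max()
```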

Step 2: create a synthetic sky image (Preetham et al., "A practical model for daylight", SIGGRAPH '99). For each time at which we have an image:
1. compute the sun direction (we know the time and location);
2. create a synthetic sky image (using the analytical model).
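Computing the sun direction from a known time and location can be approximated with textbook formulas. This is a rough sketch under those approximations; an accurate solar-position routine, feeding the Preetham model, would be used in practice:

```python
import math

def sun_position(day_of_year, solar_hour, latitude_deg):
    """Approximate sun elevation and azimuth (degrees) from the day of
    year, local solar time, and latitude (textbook approximations only).
    Azimuth is measured from north, increasing eastward.
    """
    lat = math.radians(latitude_deg)
    # Solar declination (Cooper's approximation).
    decl = math.radians(23.44) * math.sin(
        math.radians(360.0 / 365.0 * (day_of_year + 284)))
    # Hour angle: 15 degrees per hour away from solar noon.
    h = math.radians(15.0 * (solar_hour - 12.0))
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(h))
    elev = math.asin(sin_elev)
    # Azimuth from south, then shifted to a from-north convention.
    az = math.atan2(math.sin(h),
                    math.cos(h) * math.sin(lat)
                    - math.tan(decl) * math.cos(lat))
    return math.degrees(elev), (math.degrees(az) + 180.0) % 360.0
```

For example, near the equinox at the equator the noon sun is nearly overhead, and at solar noon in the northern mid-latitudes the azimuth is due south (180 degrees).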

Simulated sky compared with rectangular sub-images of the real sky.

Step 3: compute the match score.
1. Compute the normalized cross-correlation between each pair of synthetic and real sky images.
2. Average the results.
Examples shown: a westward-facing camera; the same camera with sun images dropped; a south-facing camera; an east-facing camera.
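The match score can be sketched as an average normalized cross-correlation over timestamp-aligned pairs of real and synthetic sky patches (the function name and patch format are illustrative assumptions):

```python
import numpy as np

def orientation_score(real_skies, synth_skies):
    """Average normalized cross-correlation between real sky patches and
    synthetic sky images rendered for one candidate orientation.

    real_skies, synth_skies: sequences of matching 2-D arrays, one pair
    per timestamp. Higher scores mean a better orientation hypothesis.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    return float(np.mean([ncc(r, s) for r, s in zip(real_skies, synth_skies)]))
```

Because the correlation is normalized, it is invariant to the overall brightness and contrast of each patch; the orientation whose synthetic renderings yield the highest average score is chosen.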

Conclusions: natural variations are a strong cue for location and orientation, and we have automated methods for using these cues. Future work:
- estimate scene structure
- estimate other camera parameters
- use cameras for science

Thanks. Collaborators: Robert Pless, Nathaniel Roman, Scott Satkin, Walker Burgin, and Richard Speyer. Partially supported by NSF CAREER award IIS. Image credits: Bernie Bernard, TDI-Brooks International, Inc.