Miniature faking: in close-up photos, the depth of field is limited.

Miniature faking: in close-up photos, the depth of field is limited. http://en.wikipedia.org/wiki/File:Jodhpur_tilt_shift.jpg CS 376 Lecture 16: Stereo

Miniature faking

Miniature faking http://en.wikipedia.org/wiki/File:Oregon_State_Beavers_Tilt-Shift_Miniature_Greg_Keene.jpg

Review. Previous section: feature detection and matching; model fitting and outlier rejection.

Review: Interest points. Keypoint detection should be repeatable and distinctive: corners, blobs, stable regions (Harris, DoG, MSER).

Harris Detector [Harris88], built on the second moment matrix. 1. Image derivatives Ix, Iy (optionally, blur first). 2. Products of derivatives Ix², Iy², IxIy. 3. Gaussian filter: g(Ix²), g(Iy²), g(IxIy). 4. Cornerness function har, which is large only when both eigenvalues of the second moment matrix are strong. 5. Non-maxima suppression.
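
Steps 1–4 above can be sketched as follows. This is a minimal illustration, not the exact implementation from the slides: the function name `harris_response` is mine, Sobel filters stand in for the image derivatives, and the cornerness is the standard Harris form det(M) − k·trace(M)².

```python
import numpy as np
from scipy import ndimage

def harris_response(img, sigma=1.0, k=0.04):
    # 1. Image derivatives
    Ix = ndimage.sobel(img, axis=1, mode="reflect")
    Iy = ndimage.sobel(img, axis=0, mode="reflect")
    # 2. Products of derivatives
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    # 3. Gaussian-weighted sums: entries of the second moment matrix M
    Sxx = ndimage.gaussian_filter(Ixx, sigma)
    Syy = ndimage.gaussian_filter(Iyy, sigma)
    Sxy = ndimage.gaussian_filter(Ixy, sigma)
    # 4. Cornerness: det(M) - k * trace(M)^2, large when both eigenvalues are strong
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

Non-maxima suppression (step 5) would then keep only local maxima of this response above a threshold.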

Review: Local Descriptors. Most features can be thought of as templates, histograms (counts), or combinations. Most available descriptors focus on edge/gradient information and capture texture information; color is rarely used. K. Grauman, B. Leibe

Review: Hough transform. Basic ideas: A line is represented as y = mx + b, so every line in the image corresponds to a point in the (m, b) parameter space. Every point in the image domain corresponds to a line in the parameter space: fixing (x, y), the parameters of all lines passing through (x, y) satisfy b = −xm + y, which is itself a line in parameter space. Points along a line in the image space correspond to lines passing through the same point in the parameter space. Slide from S. Savarese
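
A minimal voting sketch of this idea. Note one practical change from the slide's (m, b) parameterization: implementations usually vote in (ρ, θ) space, ρ = x·cos θ + y·sin θ, because slope m is unbounded. The function name and bin counts here are illustrative choices, not from the slides.

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=101, rho_max=50.0):
    # Each point votes for all (rho, theta) bins consistent with a line
    # through it: rho = x*cos(theta) + y*sin(theta).
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rhos + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        valid = (idx >= 0) & (idx < n_rho)
        acc[idx[valid], np.arange(n_theta)[valid]] += 1
    # Peaks in the accumulator correspond to lines supported by many points
    return acc, thetas
```

For points on the line y = x, every point votes for the same bin (ρ = 0, θ = 3π/4), which therefore collects the maximum number of votes.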

Review: RANSAC. Algorithm: 1. Sample (randomly) the minimal number of points required to fit the model (#=2 for a line). 2. Solve for the model parameters using the samples. 3. Score by the fraction of inliers within a preset threshold of the model. 4. Repeat 1–3 until the best model is found with high confidence. Intuition: consider standard line fitting in the presence of outliers. We can formulate the problem as finding the best partition of the points into an inlier set and an outlier set, such that the line fit to the inliers gives every inlier a residual error below a threshold δ; "best" means maximizing the inlier count rather than minimizing the sum of squared residuals over all points.
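
The four steps above, sketched for the line-fitting case. This is a minimal illustration under assumed defaults (200 iterations, inlier threshold 0.1); the function name `ransac_line` is mine.

```python
import numpy as np

def ransac_line(points, n_iters=200, threshold=0.1, rng=None):
    # Robustly fit y = m*x + b to an (N, 2) array of (x, y) points.
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        # 1. Sample the minimal number of points for a line (#=2)
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue  # vertical sample; not representable as y = m*x + b
        # 2. Solve for model parameters from the sample
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        # 3. Score by the number of inliers within the threshold
        residuals = np.abs(points[:, 1] - (m * points[:, 0] + b))
        n_inliers = int(np.sum(residuals < threshold))
        # 4. Keep the best-scoring model seen so far
        if n_inliers > best_inliers:
            best_model, best_inliers = (m, b), n_inliers
    return best_model, best_inliers
```

Because a single all-inlier sample recovers the true line exactly, even a handful of gross outliers does not corrupt the final fit, unlike least squares over all points.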

Review: 2D image transformations Szeliski 2.1

This section – multiple views. Today – intro to multiple views and stereo. Wednesday – camera calibration. Friday – fundamental matrix. Monday – optical flow. Wednesday – multiview wrap-up.

Oriented and Translated Camera. [figure: world frame with axes iw, jw, kw, origin Ow, and translation t to the camera]

Degrees of freedom: 5 intrinsic and 6 extrinsic. How many known points are needed to estimate these? Each 3D–2D correspondence gives two equations, so at least six points are needed for the 11 unknowns.

How to calibrate the camera?

Stereo: Intro. Computer Vision, James Hays. Slides by Kristen Grauman.

Multiple views: stereo vision, structure from motion, optical flow. [images: Lowe; Hartley and Zisserman]

Why multiple views? Structure and depth are inherently ambiguous from single views. Images from Lana Lazebnik

Phil Tippett and Jon Berg stop-motion animate a shot with all three walkers (the background walkers are cutouts). Originally the plan had been to photograph the walkers against four-by-five Ektachrome transparencies shot in Norway; when these didn’t work as planned, matte artist Mike Pangrazio copied select Ektachromes onto large scenic backings.

Why multiple views? Structure and depth are inherently ambiguous from single views. [figure: scene points P1 and P2 project to the same image point P1′ = P2′ through the optical center]

What cues help us to perceive 3d shape and depth?

Shading [Figure from Prados & Faugeras 2006]

Focus/defocus: images taken from the same point of view with different camera parameters yield 3D shape / depth estimates. [figs from H. Jin and P. Favaro, 2002]

Texture [From A.M. Loh. The recovery of 3-D structure using visual texture patterns. PhD thesis]

Perspective effects Image credit: S. Seitz

Motion. Figures from L. Zhang http://www.brainconnection.com/teasers/?main=illusion/motion-shape

Occlusion. http://rene-magritte-paintings.blogspot.com/2009/11/le-blanc-seing.html The title of René Magritte's famous painting Le Blanc-Seing (literally "The Blank Signature") roughly translates as "free hand" or "free rein".

If stereo were critical for depth perception, navigation, recognition, etc., then this would be a problem

Multi-view geometry problems. Structure: Given projections of the same 3D point in two or more images, compute the 3D coordinates of that point. Structure from motion solves the following problem: given a set of images of a static scene with 2D points in correspondence (shown here as color-coded points), find a set of 3D points P and a rotation R and position t for each camera that explain the observed correspondences. In other words, when we project a point into any of the cameras, the reprojection error between the projected and observed 2D points is low. This can be formulated as an optimization problem: find the rotations R, positions t, and 3D point locations P that minimize the sum of squared reprojection errors f. This is a non-linear least squares problem and can be solved with algorithms such as Levenberg-Marquardt. However, because the problem is non-linear, it can be susceptible to local minima, so it is important to initialize the parameters of the system carefully. In addition, we need to be able to deal with erroneous correspondences. [figure: cameras 1–3 with poses R1,t1; R2,t2; R3,t3] Slide credit: Noah Snavely

Multi-view geometry problems. Stereo correspondence: Given a point in one of the images, where could its corresponding points be in the other images? Slide credit: Noah Snavely

Multi-view geometry problems. Motion: Given a set of corresponding points in two or more images, compute the camera parameters. Slide credit: Noah Snavely

Human eye. Rough analogy with the human visual system: Pupil/Iris – controls the amount of light passing through the lens; Retina – contains the sensor cells, where the image is formed; Fovea – highest concentration of cones. Fig from Shapiro and Stockman

Human stereopsis: disparity. Human eyes fixate on a point in space, rotating so that the corresponding images form in the centers of the foveae.

Human stereopsis: disparity. Disparity occurs when the eyes fixate on one object; other objects appear at different visual angles.

Stereo photography and stereo viewers. Take two pictures of the same subject from two slightly different viewpoints and display them so that each eye sees only one of the images. Invented by Sir Charles Wheatstone, 1838. Image from fisher-price.com

http://www.johnsonshawmuseum.org
Public Library, Stereoscopic Looking Room, Chicago, by Phillips, 1923

http://www.well.com/~jimg/stereo/stereo_list.html

Autostereograms exploit disparity as a depth cue using a single image (single-image random dot stereogram, single-image stereogram). Images from magiceye.com

Autostereograms. Images from magiceye.com

Estimating depth with stereo. Stereo: shape from "motion" between two views. We'll need to consider: information on camera pose ("calibration") and image point correspondences. [figure: scene point projected onto two image planes through their optical centers]

Stereo vision: two cameras with simultaneous views, or a single moving camera and a static scene.

Camera parameters. Extrinsic parameters: camera frame 1 → camera frame 2, given by a rotation matrix and a translation vector. Intrinsic parameters: image coordinates relative to the camera → pixel coordinates, given by focal length, pixel sizes (mm), image center point, and radial distortion parameters. We'll assume for now that these parameters are given and fixed.
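
A minimal sketch of how these two parameter sets compose, assuming a simple pinhole model with no radial distortion; the function names `intrinsic_matrix` and `project` are illustrative, not from the slides.

```python
import numpy as np

def intrinsic_matrix(fx, fy, cx, cy):
    # Intrinsics: map camera-frame coordinates to pixel coordinates
    # (focal lengths fx, fy in pixels; image center (cx, cy); no distortion).
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

def project(K, R, t, X):
    # Extrinsics (R, t) bring world point X into the camera frame,
    # then K maps it to homogeneous pixel coordinates.
    Xc = R @ X + t
    uvw = K @ Xc
    return uvw[:2] / uvw[2]  # perspective divide
```

For example, with identity extrinsics a point on the optical axis projects to the image center.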

Geometry for a simple stereo system. First, assume parallel optical axes and known camera parameters (i.e., calibrated cameras):

[figure: world point P at depth Z; left and right optical centers Ol, Or separated by the baseline T; image points pl, pr; focal length f]

Geometry for a simple stereo system. Assume parallel optical axes and known camera parameters (i.e., calibrated cameras). What is the expression for Z? Similar triangles (pl, P, pr) and (Ol, P, Or) give T / Z = (T − (xl − xr)) / (Z − f), so Z = f · T / (xl − xr), where xl − xr is the disparity.
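
The resulting conversion is a one-liner; this sketch assumes consistent units (focal length in pixels, baseline in meters, disparity in pixels), and the function name is mine.

```python
def depth_from_disparity(disparity, focal_length, baseline):
    # Similar triangles for the parallel-axis stereo rig give
    # disparity d = f * T / Z, hence Z = f * T / d.
    # Larger disparity => closer point; d -> 0 corresponds to infinite depth.
    return focal_length * baseline / disparity
```

E.g., with f = 500 px and baseline T = 0.1 m, a 10-pixel disparity corresponds to a depth of 5 m.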

Depth from disparity. Given image I(x, y) and disparity map D(x, y), the second image I′(x′, y′) satisfies (x′, y′) = (x + D(x, y), y). So if we could find the corresponding points in two images, we could estimate relative depth…
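
The relation (x′, y′) = (x + D(x, y), y) can be sketched as a forward warp. This is a simplified illustration (nearest-neighbor, integer disparities, no occlusion handling); the function name is mine.

```python
import numpy as np

def warp_with_disparity(image, disparity):
    # Forward-map each pixel: I'(x + D(x, y), y) = I(x, y).
    # Pixels that land outside the image are dropped.
    h, w = image.shape
    warped = np.zeros_like(image)
    ys, xs = np.mgrid[0:h, 0:w]
    xs_new = xs + disparity.astype(int)
    valid = (xs_new >= 0) & (xs_new < w)
    warped[ys[valid], xs_new[valid]] = image[valid]
    return warped
```

A real stereo pipeline runs this in reverse: it estimates D(x, y) from the two images, then converts disparity to depth.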

Where do we need to search? To be continued…