COMP 290 Computer Vision - Spring 2000. Motion II: Estimation of the Motion Field / 3-D Reconstruction from Motion. Yongjik Kim.

Outline
- Estimating the motion field
  - Differential technique
    - Optical flow algorithm (Trucco)
    - Filling in optical flow / segmentation of multiple objects (Horn)
    - Validity of optical flow
  - Feature-based technique
    - Feature matching / feature tracking
- Recovering 3-D motion & structure

Estimating the Motion Field
- Ref) Trucco, Chapter 8.4
- Differential techniques: based on the spatial and temporal variations of the image at all pixels.
- Matching (feature-based) techniques: rely on special image points (features) and track them through the frames.

Differential Techniques: Optical Flow
Optical Flow Algorithm (Trucco, p. 196)
- For each pixel p we must satisfy ∇I · v + ∂I/∂t = 0 (the image brightness constancy equation).
- Assumption: this equation holds in a neighborhood of p with constant v.
- We write the equation for every pixel of a small (typically 5x5) patch centered at p, then take the least-squares fit for v: this is the computed optical flow at pixel p.
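A minimal sketch of this per-patch least-squares step (my own illustration, not code from the slides), assuming the spatial derivatives Ix, Iy and the temporal derivative It have already been computed over the patch:

```python
import numpy as np

def optical_flow_patch(Ix, Iy, It):
    """Solve grad(I) . v + dI/dt = 0 in the least-squares sense over a patch.
    Ix, Iy, It: derivative values on the (e.g. 5x5) patch centered at p.
    Returns v = (vx, vy), the estimated optical flow at p."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)  # one row per patch pixel
    b = -It.ravel()
    v, *_ = np.linalg.lstsq(A, b, rcond=None)       # least-squares fit of v
    return v
```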

Differential Techniques: Optical Flow
- We assumed that ∇I · v + ∂I/∂t = 0 holds in a neighborhood of p with constant v.
- Can we justify this?
- Yes: in the case of rigid motion, the motion field of a moving plane is a quadratic polynomial in the coordinates (x, y, f) of the image points. Therefore, if the object is smooth and rigid, we can assume the motion field varies smoothly. cf) Trucco, p. 187

Filling in Optical Flow Information
- Ref) Horn, Chapter 12; "Determining Optical Flow", Horn & Schunck, Artificial Intelligence 17 (1981).
- At any single point, we have optical flow information in only one direction: the component parallel to ∇I, i.e. orthogonal to the boundary (from the constraint equation, v_n = -I_t / |∇I|).
- We must 'fill in' the rest of the image with this information from the boundaries.

Filling in Optical Flow Information
How?
- Two criteria:
  - e_s: the optical flow must be smooth
  - e_c: the error in the optical flow constraint equation must be small
- Iteratively minimize e_s + k e_c. cf) Horn, p. 284
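A compact sketch of the Horn-Schunck iteration that minimizes this combined error (my own illustration under simplifying assumptions: a plain 3x3 box average stands in for the weighted local average used by Horn & Schunck, and alpha plays the role of the weight k):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
    """Iteratively minimize the smoothness error e_s plus the weighted
    constraint error e_c. Ix, Iy, It are the image derivative arrays;
    alpha controls the trade-off between the two terms."""
    u = np.zeros_like(Ix, dtype=float)
    v = np.zeros_like(Ix, dtype=float)
    for _ in range(n_iter):
        u_bar = uniform_filter(u, size=3)   # local flow averages
        v_bar = uniform_filter(v, size=3)   # (smoothness term)
        num = Ix * u_bar + Iy * v_bar + It  # constraint-equation residual
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den          # Horn-Schunck update equations
        v = v_bar - Iy * num / den
    return u, v
```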

Filling in Optical Flow Information
- Results: see Horn, Chapter 12, and Horn & Schunck (1981).

Discontinuities in Optical Flow
- If the scene is composed of multiple objects, there are discontinuities in the optical flow.
- We need the discontinuity information (= object segmentation) to refine the optical flow.
- On the other hand, we need the optical flow to find the discontinuities.
- Solution: iteratively refine both the segmentation and the optical flow.

Validity of Optical Flow
- The optical flow equation assumes that image brightness remains constant. Is that valid? Ref) Trucco, p. 194
- Even with simple Lambertian reflectance, image brightness is constant only in the case of pure translation, or when the illumination direction is parallel to the angular velocity (i.e. the axis of rotation).

Validity of Optical Flow
- Therefore, in general, the optical flow is almost always different from the motion field!
- The error is small where the image gradient is high.

Feature-based Techniques
- Ref) Trucco
- These methods yield only a sparse motion field: motion vectors are known only at feature points.
- Two-frame method: feature matching.
- How to find features? Use the optical flow algorithm (least-squares fit): if the computed optical flow v is reliable enough (i.e., its covariance matrix is small enough), we consider p a feature point, as sketched below.
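A sketch of that reliability test (my own illustration, a Shi-Tomasi style criterion rather than a quote from Trucco; the threshold tau is a hypothetical parameter): the 2x2 matrix A^T A of the least-squares system must be well conditioned, which keeps the covariance of v, proportional to (A^T A)^-1, small.

```python
import numpy as np

def is_feature(Ix, Iy, tau=0.01):
    """Accept a patch as a feature point if the smallest eigenvalue of the
    least-squares matrix A^T A (built from the patch derivatives) exceeds a
    threshold, so that the flow estimate v is well constrained."""
    ATA = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                    [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    return np.linalg.eigvalsh(ATA)[0] > tau   # eigenvalues in ascending order
```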

Feature Matching (continued)
Algorithm:
- Initially set the 'displacement field' using the optical flow algorithm.
- For each feature point p in image 1:
  1. 'Warp' its neighborhood Q1 according to the current displacement vector to obtain Q'.
  2. From Q' and the corresponding patch Q2 of image 2, find the optical flow at p (= the new displacement vector) and the image difference between Q' and Q2.
  3. If the image difference is below a threshold, exit. Otherwise go back to step 1.

Feature Tracking
Multiple-frame method: feature tracking
- Similar to feature matching, iterated between frames 0 & 1, then 1 & 2, then 2 & 3, and so on.
- Use knowledge of the prior frames to estimate the position of the feature points in the next frame (a simple sketch follows).
- cf) Trucco, p. 201: see the uncertainty (white cross) decreasing.
- cf) Trucco, p. 203: the correspondence problem.
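A minimal illustration of the prediction step, assuming a constant-velocity model (Trucco's treatment uses a Kalman filter; this simpler sketch is my own):

```python
import numpy as np

def predict_next_position(track):
    """Predict where a tracked feature will appear in the next frame from its
    last two observed positions, assuming roughly constant image velocity."""
    p_prev, p_curr = np.asarray(track[-2]), np.asarray(track[-1])
    return p_curr + (p_curr - p_prev)
```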

3-D Motion & Structure from a Sparse Motion Field
- The feature-based technique gives us only a sparse motion field: we have to extract information about the (dense) 3-D motion field and 3-D structure from it!
- Factorization method: if the camera model is orthographic and the motion is rigid, the 2N x n matrix of n feature points over N frames has rank at most 3: huge intercorrelation! Use the SVD to compute the 3-D motion and structure (see the sketch below).
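A bare-bones sketch of that rank-3 factorization (my own illustration in the spirit of Tomasi-Kanade; it assumes the measurement matrix has already been registered to the point centroid in each frame, and it omits the metric upgrade that resolves the remaining affine ambiguity):

```python
import numpy as np

def factorize(W):
    """W: 2N x n measurement matrix of n tracked points over N frames,
    centroid-subtracted per frame. Under an orthographic camera and rigid
    motion, rank(W) <= 3, so a rank-3 SVD splits it into motion and shape."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])          # 2N x 3 motion (camera) matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3]   # 3 x n structure (shape) matrix
    return M, S
```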

Motion of Rigid Objects
- Ref) Trucco, Chapter 8.2
- 'Any' rigid body motion can, at a given instant, be decomposed into a translation and a rotation with respect to a given point. Ex) a rolling ball

Motion of Rigid Objects
- In particular, a rigid body motion can be decomposed into:
  - a translation
  - a rotation about the origin of the camera reference frame
- cf) Trucco, p. 183
- From now on, by 'rotation' we will mean a rotation defined as above. (The resulting velocity equation is written out below.)
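For reference, the instantaneous velocity of a scene point under this decomposition (my own notation, not quoted from the slides): with T the translational velocity and omega the angular velocity of the camera relative to the scene, a static scene point P expressed in the camera frame moves with

\[ \mathbf{V} \;=\; \frac{d\mathbf{P}}{dt} \;=\; -\,\mathbf{T} \;-\; \boldsymbol{\omega} \times \mathbf{P}. \]

The signs flip if T and omega instead describe the object's motion in a fixed camera frame.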

Motion Parallax
- Imagine two points that are instantaneously coincident in image coordinates.
- In general, their apparent motion (motion field) does not have to be the same.
- But the difference in their apparent motions always points toward the epipole!
- Epipole: the vanishing point the motion field would have if there were no rotation. cf) Trucco

3-D Motion & Structure from a Dense Motion Field (sketch)
- If we assume the motion field is continuous, we can use adjacent points instead of coinciding points to compute the direction to the epipole.
- Each pair of adjacent points determines a line through the epipole; with many pairs we can use a least-squares fit to find the epipole (see the sketch below).
- Once we have found the epipole, we can compute the angular velocity: then we have the 3-D motion.
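A minimal sketch of that least-squares epipole estimate (my own illustration; the array names are hypothetical): each pair contributes a line through one of the image points in the direction of the motion-parallax difference, and the epipole is the point closest to all of these lines.

```python
import numpy as np

def estimate_epipole(points, parallax):
    """points: M x 2 image points (one per adjacent pair);
    parallax: M x 2 motion-parallax difference vectors at those points.
    Each pair defines the line through points[i] with direction parallax[i];
    find the point minimizing the squared distances to all the lines."""
    n = np.stack([-parallax[:, 1], parallax[:, 0]], axis=1)   # line normals
    n = n / np.linalg.norm(n, axis=1, keepdims=True)          # unit normals
    b = np.sum(n * points, axis=1)                            # n . p per line
    e, *_ = np.linalg.lstsq(n, b, rcond=None)                 # least-squares epipole
    return e
```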