Nick Hirsch: Progress Report

• Track features from frame to frame using KLT and calculate their optical flow.
• Determine the Focus of Expansion (FoE): the center that all other points appear to expand from when the camera moves forward.
• Determine which points aren't expanding outward from the FoE.

Pipeline: Track Features (KLT) → Calculate Optical Flow → Compare to FoE Direction Field → Update FoE
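A minimal sketch of the first two steps of this pipeline, using OpenCV's corner detector and pyramidal Lucas-Kanade (KLT) tracker; the parameter values and the "input.avi" path are illustrative assumptions, not taken from the report:

```python
import cv2

# Read two consecutive frames ("input.avi" is a placeholder path).
cap = cv2.VideoCapture("input.avi")
_, prev_frame = cap.read()
_, next_frame = cap.read()
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
next_gray = cv2.cvtColor(next_frame, cv2.COLOR_BGR2GRAY)

# Detect corners in the first frame, then track them into the second
# frame with pyramidal Lucas-Kanade (KLT).
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)
new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)

ok = status.flatten() == 1
good_old = pts[ok].reshape(-1, 2)      # feature positions in frame t
good_new = new_pts[ok].reshape(-1, 2)  # positions in frame t+1
flow = good_new - good_old             # per-feature optical flow vectors
```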

As you can see, the pixels belonging to stationary objects move along the lines emanating from the FoE.

Green Arrows = Moving
Red Arrows = Stationary

• Jan Prokaj handed his code over to me.
• Attempted to reproduce Jan's results.
◦ His algorithm relies on the video being jitter-free.
◦ His estimate of the Focus of Expansion was highly inaccurate.

Blue Square – Current estimate of the Focus of Expansion
Yellow Square – Smoothed estimate of the Focus of Expansion

• We can assume that most objects in the given video are stationary and that their pixels expand outward from the FoE.
• To find the true FoE (yellow) given the optical flow vectors (black), start with a hypothesized FoE (green), generate some neighboring FoEs, calculate their energy, and update the hypothesized FoE, as in the sketch below.
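A sketch of that neighbor search as a greedy hill climb; the 4-neighbor pattern, step sizes, and stopping rule here are assumptions, and the energy function is the one defined on the next slide:

```python
import numpy as np

def refine_foe(foe, energy, step=8.0, min_step=0.5):
    """Greedy local search: move the hypothesized FoE to whichever of its
    four axis-aligned neighbors has lower energy; halve the step when none does."""
    foe = np.asarray(foe, dtype=float)
    offsets = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    while step >= min_step:
        neighbors = foe + step * offsets
        best = min(neighbors, key=energy)
        if energy(best) < energy(foe):
            foe = best        # accept the better hypothesis
        else:
            step *= 0.5       # no neighbor improved: refine the search radius
    return foe
```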

• We assume all pixels should be traveling radially outward from the Focus of Expansion.
• To calculate an updated FoE, we minimize the weighted SSD between the direction each feature F_i is actually traveling and the direction it should be traveling under the FoE hypothesis.

• w_i – the magnitude of feature point i's flow vector
◦ I use this weight because features with larger magnitudes are often more accurate.
• θ_FoE,i – the direction of the FoE field at feature point i
• θ_i – the direction of feature point i
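Putting these definitions together, the energy being minimized is presumably the weighted SSD of directions,

    E(\mathrm{FoE}) = \sum_i w_i \,\big(\theta_{\mathrm{FoE},i} - \theta_i\big)^2,

with the angular difference wrapped into (−π, π]. A matching energy function for the search sketch above, under the same assumptions:

```python
import numpy as np

def make_energy(positions, flow):
    """Build E(FoE) from tracked features.
    positions: (N, 2) feature locations; flow: (N, 2) flow vectors."""
    w = np.linalg.norm(flow, axis=1)             # w_i: flow magnitude
    theta = np.arctan2(flow[:, 1], flow[:, 0])   # theta_i: observed direction

    def energy(foe):
        radial = positions - foe                 # radial FoE field at each feature
        theta_foe = np.arctan2(radial[:, 1], radial[:, 0])
        d = theta_foe - theta
        d = np.arctan2(np.sin(d), np.cos(d))     # wrap into (-pi, pi]
        return float(np.sum(w * d ** 2))

    return energy
```

With these pieces, refine_foe(initial_foe, make_energy(good_old, flow)) would run the full update.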

• Another way to find the FoE is to solve for the point where all the optical flow vectors intersect.
• To do this, I convert the optical flow vectors into homogeneous lines, stack them into an N×3 matrix, and take its Singular Value Decomposition. The right singular vector corresponding to the smallest singular value gives the point closest, in the least-squares sense, to all of the lines.
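A sketch of that computation, assuming each flow vector is extended to the homogeneous line through its start and end points:

```python
import numpy as np

def foe_from_flow_svd(positions, flow):
    """Least-squares intersection of the flow lines.
    positions: (N, 2) feature locations; flow: (N, 2) flow vectors."""
    n = len(positions)
    p = np.hstack([positions, np.ones((n, 1))])         # start points (homogeneous)
    q = np.hstack([positions + flow, np.ones((n, 1))])  # end points (homogeneous)
    lines = np.cross(p, q)        # homogeneous line through each point pair
    # All lines pass (approximately) through the FoE x, so L x ~ 0; the
    # least-squares solution is the right singular vector of L associated
    # with the smallest singular value.
    _, _, vt = np.linalg.svd(lines)
    x = vt[-1]
    return x[:2] / x[2]           # de-homogenize back to pixel coordinates
```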

• To further improve my estimate of the FoE, I'm going to use RANSAC to weed out the outliers among the optical flow vectors.
• Next, I'd like to determine a good method for clustering the vectors classified as moving.
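One possible shape for that RANSAC step, reusing foe_from_flow_svd from the sketch above; the sample size, angular inlier threshold, and iteration count are assumptions, since this part is still planned work:

```python
import numpy as np

def ransac_foe(positions, flow, n_iters=200, angle_thresh=0.2, rng=None):
    """Hypothesize the FoE from minimal samples of two flow lines and keep
    the hypothesis that the most flow vectors agree with (the inliers)."""
    rng = rng if rng is not None else np.random.default_rng()
    theta = np.arctan2(flow[:, 1], flow[:, 0])
    best_inliers = np.zeros(len(flow), dtype=bool)
    for _ in range(n_iters):
        idx = rng.choice(len(flow), size=2, replace=False)
        foe = foe_from_flow_svd(positions[idx], flow[idx])  # two lines -> one point
        radial = positions - foe
        d = np.arctan2(radial[:, 1], radial[:, 0]) - theta
        d = np.arctan2(np.sin(d), np.cos(d))     # wrap into (-pi, pi]
        inliers = np.abs(d) < angle_thresh       # flows roughly radial from foe
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on the consensus set for the final estimate.
    return foe_from_flow_svd(positions[best_inliers], flow[best_inliers])
```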