
CSSE463: Image Recognition Day 30
Due Friday – Project plan: evidence that you've tried something and what specifically you hope to accomplish.
This week:
- Today: motion vectors and tracking
- Thursday: topic du jour
- Friday: project workday
Questions?

What is image flow?
Notice that we can take partial derivatives with respect to x, y, and time.

Image flow equations
Goal: find where each pixel in frame t moves in frame t+Δt; e.g., for 2 adjacent frames, Δt = 1. That is, Δx and Δy are unknown.
Assume:
- The illumination of the object doesn't change.
- The distances of the object from the camera and lighting don't change.
- Each small intensity neighborhood can be observed in consecutive frames: f(x, y, t) ≈ f(x+Δx, y+Δy, t+Δt) for some Δx, Δy (the correct motion vector).
Compute a Taylor-series expansion around a point in (x, y, t) coordinates. This gives the edge gradient and the temporal gradient; solve for (Δx, Δy).
See the answers to the first quiz question now.
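Written out, the Taylor-series step the slide describes is the standard optical flow constraint derivation (a sketch consistent with textbook treatments such as Trucco & Verri, not necessarily the exact notation used in class):

\[
f(x+\Delta x,\; y+\Delta y,\; t+\Delta t) \;\approx\;
f(x,y,t) + \frac{\partial f}{\partial x}\Delta x + \frac{\partial f}{\partial y}\Delta y + \frac{\partial f}{\partial t}\Delta t .
\]

Setting this equal to f(x, y, t) by the brightness-constancy assumption and dividing by Δt gives one linear equation in the two unknown flow components u = Δx/Δt and v = Δy/Δt:

\[
f_x\, u + f_y\, v + f_t \;=\; 0 ,
\]

where (f_x, f_y) is the spatial (edge) gradient and f_t is the temporal gradient. One equation, two unknowns: this is why the next slides discuss the ambiguity of the solution.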

Limitations
- The assumptions don't always hold in real-world images.
- Doesn't give a unique solution for the flow; sometimes motion is ambiguous.
- Look at the last question on the last quiz.
"Live demo"

Aperture problem
When we only see a small part of the image, sometimes motion is ambiguous.
Lesson: by constraining ourselves to a small neighborhood, we lose the "big picture."
Solutions:
- Assume the pixels belong to the same object and have the same motion (one standard formulation is sketched after this list).
- Propagate constraints between pixels:
  - Can still take many frames to converge.
  - Doesn't work at occlusions (hidden parts of the object).
  - Sensitive to outliers.
- Only track the motion of "interesting points."
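The slide does not name an algorithm for the first solution; one common realization of "every pixel in a small neighborhood has the same motion" is the Lucas–Kanade least-squares formulation, sketched here under that assumption (this may differ from whatever formulation the course presents):

\[
\begin{bmatrix} \sum f_x^2 & \sum f_x f_y \\ \sum f_x f_y & \sum f_y^2 \end{bmatrix}
\begin{bmatrix} u \\ v \end{bmatrix}
=
-\begin{bmatrix} \sum f_x f_t \\ \sum f_y f_t \end{bmatrix},
\]

with the sums taken over the neighborhood. The 2×2 system has a stable solution only where the matrix is well conditioned, i.e. where the gradients point in more than one direction, which is exactly what the interest points on the following slides provide.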

Interest point tracking
Idea: corners and other easily identifiable points can be tracked, and their motion is unambiguous.
Technique for calculating a set of interest points (see the sketch below):
- Loop over the image.
- Calculate an interest score for a small neighborhood around each pixel.
- If the score is above a threshold, add the pixel to the point set to be returned.
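A minimal sketch of that loop in Python/NumPy (the course uses Matlab; the function and parameter names here, and the pluggable score_fn, are illustrative assumptions rather than course code):

import numpy as np

def find_interest_points(img, score_fn, threshold, half=2):
    """Return (row, col) pairs whose neighborhood scores above threshold.

    score_fn maps a square patch of side 2*half+1 to a scalar interest
    score, e.g. the minimum-directional-variance score sketched two
    slides below.
    """
    points = []
    rows, cols = img.shape
    for r in range(half, rows - half):          # skip the border so every
        for c in range(half, cols - half):      # pixel has a full window
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            if score_fn(patch) > threshold:
                points.append((r, c))
    return points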

Stretch break
When is your group willing to present?

Interest scores
One solution: take the minimum of the variances found in 4 directions through the window (a 3×3 window was shown on the slide, but a bigger window would be used in practice); see the sketch below.
Another related texture-based operator is based on the directions with significant edge gradient.
Corners could also be detected using Kirsch operators (similar to other edge operators).
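A sketch of the minimum-directional-variance idea in Python/NumPy (treating the four directions as the lines through the patch center is an assumption about the slide's figure, which is not preserved in this transcript):

import numpy as np

def min_directional_variance(patch):
    """Interest score for a square patch with odd side length.

    Takes the intensity variance along the horizontal, vertical, and two
    diagonal lines through the patch center and returns the minimum, so
    the score is high only when intensity varies in every direction (a
    corner), not just along an edge.
    """
    p = np.asarray(patch, dtype=float)
    mid = p.shape[0] // 2
    lines = [
        p[mid, :],                   # horizontal
        p[:, mid],                   # vertical
        np.diagonal(p),              # main diagonal
        np.diagonal(np.fliplr(p)),   # anti-diagonal
    ]
    return min(line.var() for line in lines)

With the earlier sketch, find_interest_points(img, min_directional_variance, threshold) would return the candidate corners.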

Tracking interest points
- Search for each point directly using a template, and choose the location with the highest correlation (see the sketch below).
- Only a small neighborhood needs to be searched if the frame rate is high enough.
- Shapiro's Alg. 9.3.
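A minimal Python sketch of that per-point search (the zero_mean_correlation score it calls is defined after the next slide; the window sizes are illustrative and border handling is omitted):

import numpy as np

def track_point(prev_img, next_img, pt, thalf=3, shalf=5):
    """Re-locate one interest point in the next frame.

    Cuts a square template of side 2*thalf+1 around the point's old
    position and exhaustively scores every candidate position within
    shalf pixels of it, keeping the best-scoring location.  Assumes the
    search window stays inside the image.
    """
    r0, c0 = pt
    template = prev_img[r0 - thalf:r0 + thalf + 1, c0 - thalf:c0 + thalf + 1]
    best_score, best_pt = -np.inf, pt
    for r in range(r0 - shalf, r0 + shalf + 1):
        for c in range(c0 - shalf, c0 + shalf + 1):
            window = next_img[r - thalf:r + thalf + 1, c - thalf:c + thalf + 1]
            score = zero_mean_correlation(template, window)
            if score > best_score:
                best_score, best_pt = score, (r, c)
    return best_pt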

Correlation
- Just the dot product between the template and a neighborhood in the image.
- Idea: the correlation is high when the template matches.
- Problem: the correlation is also always high when matching any bright region.
- Solution: normalize each region by subtracting its mean before taking the dot product (see the sketch below).
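The normalization step as code (Python/NumPy; a sketch of the idea on the slide rather than full normalized cross-correlation, which would also divide by the standard deviations):

import numpy as np

def zero_mean_correlation(template, window):
    """Dot product between a template and an equally sized window after
    subtracting each region's mean, so uniformly bright regions no longer
    score high against everything."""
    t = template - template.mean()
    w = window - window.mean()
    return float(np.sum(t * w))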

Matlab libraries exist to extract frames (still images) from video for processing.
Demo of a simple tracking application.
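The course demo is in Matlab; as a rough equivalent, here is a frame-extraction sketch in Python using OpenCV (assumptions: OpenCV is installed, and the file name is a placeholder):

import cv2  # OpenCV

cap = cv2.VideoCapture("video.avi")   # placeholder file name
frames = []
while True:
    ok, frame = cap.read()            # ok is False once the video ends
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))  # grayscale for tracking
cap.release()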

Computing trajectories
Problem: what if unlabeled moving points belong to multiple trajectories?
Idea: we want to maximize smoothness, defined in terms of the change in direction and the change of speed.
Given two points p_t and p_{t+1} thought to be related, calculate the displacement v_t = p_{t+1} − p_t. The smoothness measure then combines a direction term with a term comparing the average speed to the geometric mean of the speeds (the formula on the slide is not preserved in this transcript; a standard form is given below).
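A standard smoothness measure of that form (Sethi–Jain path coherence, as presented in Shapiro & Stockman; assumed here to be what the slide's formula showed) is, for consecutive displacement vectors v_{t-1} and v_t and a weight w:

\[
s_t \;=\; w\,\frac{\mathbf{v}_{t-1}\cdot\mathbf{v}_t}{\|\mathbf{v}_{t-1}\|\,\|\mathbf{v}_t\|}
\;+\;(1-w)\,\frac{2\sqrt{\|\mathbf{v}_{t-1}\|\,\|\mathbf{v}_t\|}}{\|\mathbf{v}_{t-1}\|+\|\mathbf{v}_t\|}.
\]

The first term is the cosine of the turn angle (direction change); the second is the ratio of the geometric mean of the two speeds to their arithmetic mean, which equals 1 exactly when the speed is constant.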

Computing trajectories
The "Greedy-Exchange" algorithm is then used to choose which points belong to which trajectory.
It operates by creating trajectories and exchanging points between them whenever doing so would increase total smoothness (a rough sketch follows).
It finds a local optimum.
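A rough Python sketch of that exchange loop, built on the smoothness measure above (an illustration of the idea on the slide, not a faithful Sethi–Jain implementation; the swap granularity and stopping rule are assumptions):

import numpy as np

def smoothness(v_prev, v_next, w=0.5):
    """Per-point smoothness of two consecutive displacement vectors."""
    a, b = np.linalg.norm(v_prev), np.linalg.norm(v_next)
    if a == 0.0 or b == 0.0:
        return 0.0
    direction = np.dot(v_prev, v_next) / (a * b)
    speed = 2.0 * np.sqrt(a * b) / (a + b)
    return w * direction + (1.0 - w) * speed

def total_smoothness(trajs):
    """Sum of smoothness over the interior points of each trajectory
    (trajs: list of (T, 2) arrays of point positions)."""
    total = 0.0
    for traj in trajs:
        v = np.diff(traj, axis=0)
        total += sum(smoothness(v[t], v[t + 1]) for t in range(len(v) - 1))
    return total

def greedy_exchange(trajs, max_passes=20):
    """Swap the points two trajectories hold at a single frame whenever the
    swap increases total smoothness; stop at a local optimum.
    Assumes all trajectories cover the same frames."""
    trajs = [np.array(t, dtype=float) for t in trajs]
    for _ in range(max_passes):
        improved = False
        for i in range(len(trajs)):
            for j in range(i + 1, len(trajs)):
                for t in range(len(trajs[i])):
                    before = total_smoothness([trajs[i], trajs[j]])
                    trajs[i][t], trajs[j][t] = trajs[j][t].copy(), trajs[i][t].copy()
                    if total_smoothness([trajs[i], trajs[j]]) <= before:
                        # the exchange did not help: swap back
                        trajs[i][t], trajs[j][t] = trajs[j][t].copy(), trajs[i][t].copy()
                    else:
                        improved = True
        if not improved:
            break
    return trajs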

Another approach
The Kalman filter is a probabilistic model that combines noisy measurements with the expected trajectory of the object. It works even with occlusion (a minimal sketch follows).
See the reference linked from the slide for a start.
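A minimal constant-velocity sketch in Python/NumPy (standard textbook Kalman predict/update equations; the state layout and noise values are assumptions for illustration, not course material):

import numpy as np

# State is [x, y, vx, vy] for one tracked point; we measure only (x, y).
F = np.array([[1., 0., 1., 0.],   # state transition: position += velocity
              [0., 1., 0., 1.],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
H = np.array([[1., 0., 0., 0.],   # measurement picks out the position
              [0., 1., 0., 0.]])
Q = 0.01 * np.eye(4)              # process noise: how much we trust the model
R = 4.0 * np.eye(2)               # measurement noise: how much we trust the detector

x = np.zeros(4)                   # state estimate
P = 100.0 * np.eye(4)             # state covariance: start very uncertain

def kalman_step(z):
    """One predict/update cycle.  z is the measured (x, y) for this frame,
    or None if the point is occluded, in which case we only predict."""
    global x, P
    x = F @ x                              # predict along the expected trajectory
    P = F @ P @ F.T + Q
    if z is not None:
        y = np.asarray(z, dtype=float) - H @ x        # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
        x = x + K @ y                                 # blend prediction and measurement
        P = (np.eye(4) - K @ H) @ P
    return x[:2]                                      # current position estimate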

Ghost_outline.htm
Thanks to Thomas Root for the link.