CSSE463: Image Recognition Day 29

CSSE463: Image Recognition Day 29
This week:
- Today: surveillance and finding motion vectors
- Lab 7: due Wednesday
- Thursday: motion and tracking
- Friday: project workday in class; status report due in the dropbox by 8 am
Questions?

Motion
- New domain: image sequences. Additional dimension: time.
- Cases:
  - Still camera, moving objects: detection, recognition, surveillance
  - Moving camera, constant scene: 3D structure of the scene
  - Moving camera, several moving objects: robot car navigation through traffic

Surveillance
- Stationary camera, moving objects
- Applications:
  - Military
  - Hospital halls during the night
- Task: separate the background from the moving objects
Q1

Finding moving objects
- Subtract images
- What next… How could you use this to find moving objects?
  - Discuss with a classmate
  - Share with the class
Q2

Processing ideas
- Subtract images
- Mark those pixels that changed significantly (over a threshold)
- Connected components (fill?)
- Toss small regions
- Morphological closing to merge neighboring regions
- Return bounding box
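A minimal code sketch of this pipeline, assuming OpenCV and NumPy; the threshold of 25 gray levels, the 5x5 closing kernel, and the 50-pixel minimum area are illustrative values, not from the slides:

```python
import cv2
import numpy as np

def find_moving_objects(prev_frame, curr_frame, thresh=25, min_area=50):
    """Subtract frames, threshold, close, label connected components,
    toss small regions, and return bounding boxes of what remains."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY)

    # Subtract images and mark pixels that changed significantly
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)

    # Morphological closing to merge neighboring changed regions
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Connected components; toss small regions; return bounding boxes
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n_labels):          # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes
```

With a still camera, the "previous frame" here can be replaced by a background model, which is where the next slide picks up.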

Issues with image subtraction
- Background model
  - Simplest: the previous frame
  - More general: find the mean M and the variance of many frames
- Consider the hospital hallway with a window: how to handle "drift" due to illumination changes?
  - For each pixel p with mean M: M_new = a*M_old + (1 - a)*p (see the sketch after this list)
- Consider what happens when a person enters the scene:
  - The background model adapts to her
  - What happens when she leaves? The mean has changed, so the background is detected as foreground
  - The variance remains high, so new arrivals can't be detected
  - Answer: multiple models
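A sketch of that per-pixel running average, plus a running variance for a foreground test; the blending weight a = 0.95, the initial variance, and the 3-sigma test are assumed values, not from the slides:

```python
import numpy as np

class RunningBackground:
    """Per-pixel background model: M_new = a*M_old + (1 - a)*p,
    with a running variance used to flag foreground pixels."""

    def __init__(self, first_frame, a=0.95, k=3.0):
        self.mean = first_frame.astype(np.float64)
        self.var = np.full_like(self.mean, 15.0 ** 2)  # assumed initial variance
        self.a = a    # how slowly the model drifts toward new frames
        self.k = k    # foreground if |p - M| > k standard deviations

    def update(self, frame):
        p = frame.astype(np.float64)
        diff = p - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        # Drift handling: blend the new frame into the mean and variance
        self.mean = self.a * self.mean + (1 - self.a) * p
        self.var = self.a * self.var + (1 - self.a) * diff ** 2
        return foreground
```

Because every pixel is blended in, a person who stands still long enough gets absorbed into the mean, which is exactly the adaptation problem described above; the "multiple models" fix keeps several (mean, variance) pairs per pixel and matches each new value against all of them.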

Motion vectors
- Difference in motion of specific objects
- Calculated for each pixel in the image
- Examples: pan, zoom (see the sketch below). For combined pan + zoom, see: http://www.rose-hulman.edu/~boutell/projects/motionvector/ (developed by former CSSE463 student Francis Meng)
- How to find them? Two techniques
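To make "a motion vector at every pixel" concrete, here is a small sketch of the ideal vector fields for a pure pan and a pure zoom; the 3-pixel pan and 5% zoom factor are made-up illustrative numbers:

```python
import numpy as np

h, w = 240, 320
ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
cx, cy = w / 2.0, h / 2.0

# Pan: every pixel moves by the same vector (here 3 px right, 1 px down)
pan_u = np.full((h, w), 3.0)
pan_v = np.full((h, w), 1.0)

# Zoom in: vectors point away from the image center and grow with
# distance from it (each pixel ends up 5% farther from the center)
s = 0.05
zoom_u = s * (xs - cx)
zoom_v = s * (ys - cy)
```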

What is image flow?
- Notice that we can take partial derivatives of the image intensity with respect to x, y, and time:
  - dI/dx? dI/dy? dI/dt?
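A minimal sketch of approximating those three derivatives from two consecutive grayscale frames with finite differences (Sobel or Gaussian-derivative filters are common, more robust alternatives):

```python
import numpy as np

def image_gradients(f0, f1):
    """Approximate dI/dx and dI/dy (spatial) and dI/dt (temporal)
    from two consecutive grayscale frames f0 and f1."""
    f0 = f0.astype(np.float64)
    f1 = f1.astype(np.float64)
    # np.gradient returns derivatives along axis 0 (rows, y) then axis 1 (cols, x)
    Iy, Ix = np.gradient(f0)
    # Temporal derivative: the change between the two frames
    It = f1 - f0
    return Ix, Iy, It
```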

Image flow equations
- Goal: find where each pixel in frame t moves to in frame t+Δt (e.g., for two adjacent frames, Δt = 1). That is, Δx and Δy are the unknowns.
- Assume:
  - The illumination of the object doesn't change
  - The distances of the object from the camera and from the lighting don't change
  - Each small intensity neighborhood can be observed in consecutive frames: f(x, y, t) ≈ f(x + Δx, y + Δy, t + Δt) for some Δx, Δy (the correct motion vector)
- Compute a Taylor-series expansion around a point in (x, y, t) coordinates; this brings in the edge (spatial) gradient and the temporal gradient
- Solve for (Δx, Δy)
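A worked version of the Taylor-expansion step, written out in LaTeX; the f_x, f_y, f_t shorthand for the partial derivatives is my notation, not from the slide:

```latex
% First-order Taylor expansion of the matched neighborhood:
f(x+\Delta x,\, y+\Delta y,\, t+\Delta t)
   \approx f(x,y,t) + f_x \Delta x + f_y \Delta y + f_t \Delta t
% The neighborhood-matching assumption says the left side equals f(x,y,t), so
f_x \Delta x + f_y \Delta y + f_t \Delta t \approx 0
```

This is one equation in the two unknowns (Δx, Δy), so by itself it only pins down the flow component along the gradient; extra constraints, such as assuming the flow is constant over a small window and solving a least-squares system (the Lucas-Kanade approach, not named on this slide), are needed for a unique answer.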

Limitations
- The assumptions don't always hold in real-world images
- Doesn't give a unique solution for the flow; sometimes the motion is ambiguous
- "Live demo"