Vehicle Movement Tracking

Vehicle Movement Tracking
Written by Goldberg Stanislav for the Vision Topics Seminar

Present Day Traffic Management
- Magnetic Loop Detectors
- Video Monitoring Systems

Magnetic Loop Detectors
http://micro.magnet.fsu.edu/electromag/java/detector/index.html

Magnetic Loop Detectors
Pros:
- Accurate counting
- Stable under different lighting and traffic conditions
Cons:
- Costly: requires digging up the road surface
- Unable to provide additional traffic parameters

Video Monitoring Systems

Video Monitoring Systems
Pros:
- Vehicle counts and speeds
- Vehicle classification: { Bus, Truck, Bike, Car }
- Lane changes
- Acceleration/deceleration
- Queue length for traffic jams
- Less costly to install than magnetic loop detectors
Cons:
- Problems with congestion (vehicle occlusion)
- Long shadows linking vehicles together
- Transition between day and night

Tracking Requirements
- Automatic segmentation of each vehicle from the background and from other vehicles, so that a unique track can be associated with each vehicle
- Deal with a variety of vehicles: motorcycles, passenger cars, buses, construction equipment, trucks, etc.
- Deal with a range of traffic conditions: light midday traffic, rush-hour congestion, varying speeds in different lanes
- Deal with a variety of lighting conditions: day, evening, night; sunny, overcast, and rainy days
- Real-time operation of the system

Vehicle Tracking Approaches
- 3D model based
- Region based
- Active contour based
- Feature based

3D Model based tracking
The image view is aligned with a detailed 3D model of each vehicle.
Pros:
- Easy vehicle classification { Bus, Truck, Bike, ... }
Cons:
- Memory- and processing-intensive approach
- Unrealistic to expect detailed models for every vehicle found on the roadway

Region based tracking
Every connected region in the image ("a blob") is associated with a vehicle and tracked over time using cross-correlation. The blobs are found by means of background subtraction.
Pros:
- Works well in free-flowing traffic conditions
Cons:
- Partial occlusion under congested traffic conditions leads to grouping of several vehicles into one blob

Active contour based tracking
Represents each vehicle by the bounding contour of the object and dynamically updates it during tracking.
Pros:
- Reduced computational complexity compared to the region based approach
Cons:
- Partial occlusion is still a problem

Feature based tracking
Tracks not the object as a whole, but sub-features such as distinctive points or lines on the object.
Pros:
- Partial occlusion is not a problem: some of the sub-features remain visible
Cons:
- An additional problem to solve: which set of sub-features belongs to one object (grouping)

Motion Based Grouping
- Based on a common motion constraint, a.k.a. "common fate": sub-features that move rigidly together are grouped into a single vehicle.
- The grouping must be sensitive enough to pick up even slight acceleration or lane drift, in order to distinguish a vehicle from its neighbors.
- Spatial proximity (sub-features lying close to one another) must also be taken into consideration.

Motion Based Grouping: why is it good for vehicle tracking?
- In congested traffic, vehicles constantly change their velocities to adjust to nearby traffic, giving the grouper the information it needs to perform the segmentation.
- In free-flowing traffic, vehicles are more likely to maintain a constant speed with almost no lane drift, which makes "common fate" grouping less useful, but there is also more space between vehicles.

The Algorithm (David Beymer et al.)
1. Off-line camera definition
2. Feature detection
3. Feature tracking
4. Feature grouping
5. Obtaining traffic parameters
6. Vehicle classification

The Algorithm

Off-line camera definitions (1): line correspondence for the homography
A projective transform H, or homography, is used to map from image coordinates (x, y) to world coordinates (X, Y). A sketch of applying such a transform is shown below.
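As an illustration only (this code is not from the original slides; the function name and the example matrix are made up), a homography can be applied by lifting points to homogeneous coordinates, multiplying by the 3x3 matrix H, and dividing out the scale:

```python
import numpy as np

def apply_homography(H, points_xy):
    """Map image points (x, y) to world points (X, Y) with a 3x3 homography H."""
    pts = np.asarray(points_xy, dtype=float)
    # Lift to homogeneous coordinates: (x, y) -> (x, y, 1).
    homog = np.hstack([pts, np.ones((pts.shape[0], 1))])
    mapped = homog @ H.T
    # Divide by the third (scale) component to return to 2D world coordinates.
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical homography, e.g. estimated once, off-line, from line correspondences.
H = np.array([[1.2,   0.1,  3.0],
              [0.0,   1.5, -2.0],
              [0.001, 0.0,  1.0]])
print(apply_homography(H, [[100.0, 50.0], [200.0, 80.0]]))
```

In practice H would be estimated a single time during the off-line camera definition step and then reused for every tracked feature.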

Off-line camera definitions (1): line correspondence for the homography
H – a linear transformation { rotation, scaling, shear, reflection }.
Example: (x, y) rotated clockwise by an angle theta and translated by (x0, y0), with no scaling (see the matrix form below).
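The matrix itself appeared on the original slide as an image; written out (my reconstruction of the standard form, in homogeneous coordinates), a clockwise rotation by θ followed by a translation by (x0, y0) is:

```latex
\begin{pmatrix} X \\ Y \\ 1 \end{pmatrix}
=
\underbrace{\begin{pmatrix}
 \cos\theta & \sin\theta & x_0 \\
-\sin\theta & \cos\theta & y_0 \\
 0          & 0          & 1
\end{pmatrix}}_{H}
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
```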

Off-line camera definitions (2): detection regions
- Stop Detection Area
- Start Detection Area

Off-line camera definitions (3): fiducial points for camera stabilization

Feature detection
- Corner features are chosen as the sub-features
- The corner detector is based on the image gradient

Feature detection
Horizontal (x axis) differentiation can be approximated by a discrete derivative kernel, e.g. [-1 0 1]; vertical (y axis) differentiation uses the transposed kernel. (Adapted from Hagit's Image Processing slides.)
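The slides do not show the exact detector, so the following is only a sketch of the general idea: a Harris-style corner response built from Sobel image gradients. The function name, window size, and thresholds are my own choices, not values from the paper.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def harris_corners(image, k=0.04, window=5, threshold_rel=0.01):
    """Return (row, col) coordinates of corner candidates in a grayscale image.

    Corners are points where the image gradient varies strongly in two
    directions, i.e. both eigenvalues of the local structure tensor are large.
    """
    img = image.astype(float)
    # Image gradients (Sobel approximation of d/dx and d/dy).
    ix = sobel(img, axis=1)
    iy = sobel(img, axis=0)
    # Structure tensor entries, averaged over a local window.
    ixx = uniform_filter(ix * ix, size=window)
    iyy = uniform_filter(iy * iy, size=window)
    ixy = uniform_filter(ix * iy, size=window)
    # Harris response: det(M) - k * trace(M)^2.
    response = (ixx * iyy - ixy * ixy) - k * (ixx + iyy) ** 2
    # Keep points whose response exceeds a fraction of the maximum.
    return np.argwhere(response > threshold_rel * response.max())

# Usage on a synthetic image with one bright square (its corners fire).
img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0
print(harris_corners(img)[:5])
```

Corners found this way serve as the sub-features that are handed to the tracker.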

Feature detection

Feature tracking: Kalman Filter Predictor
The position and velocity of the vehicle are described by a linear state space model. We assume that between the (k − 1)-th and k-th timestep the vehicle undergoes a constant acceleration a_k that is normally distributed, with mean 0 and standard deviation σ_a. From Newton's laws of motion we obtain the state update written out below.
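The equations were shown as images on the original slide; the standard constant-acceleration formulation they describe (reconstructed here, not copied from the slide) is:

```latex
\mathbf{x}_k = \begin{bmatrix} p_k \\ \dot{p}_k \end{bmatrix}, \qquad
\mathbf{x}_k = \mathbf{F}\,\mathbf{x}_{k-1} + \mathbf{G}\,a_k, \qquad
\mathbf{F} = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \quad
\mathbf{G} = \begin{bmatrix} \tfrac{\Delta t^2}{2} \\ \Delta t \end{bmatrix}, \qquad
a_k \sim \mathcal{N}(0, \sigma_a^2)
```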

Feature tracking
- Kalman filtering predicts the area in which to search for each corner feature
- The corner feature is found (measured) inside that area
- The distance between the predicted and the measured feature position is computed
- If the distance is above a threshold, the track is rejected
A minimal sketch of this predict / measure / gate loop follows.
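A minimal, illustrative sketch of the predict-and-gate step for a single feature track; the frame rate, noise levels, and gating threshold are assumptions of mine, not values from the paper.

```python
import numpy as np

DT = 1.0 / 30.0          # assumed frame interval (30 fps)
SIGMA_A = 3.0            # assumed acceleration noise
GATE = 9.21              # chi-square 99% gate for a 2-D measurement

# Constant-acceleration model applied independently to x and y:
# state = [x, y, vx, vy], measurement = [x, y].
F = np.array([[1, 0, DT, 0],
              [0, 1, 0, DT],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
G = np.array([[DT**2 / 2, 0],
              [0, DT**2 / 2],
              [DT,        0],
              [0,        DT]], dtype=float)
Q = SIGMA_A**2 * G @ G.T                   # process noise
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is measured
R = np.eye(2) * 1.0                        # assumed measurement noise (px^2)

def predict(x, P):
    """Predict the next state and covariance."""
    return F @ x, F @ P @ F.T + Q

def gate_and_update(x, P, z):
    """Accept measurement z if it lies inside the validation gate, then update;
    otherwise signal that the track should be rejected."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    d2 = y @ np.linalg.solve(S, y)         # squared Mahalanobis distance
    if d2 > GATE:
        return None                        # distance above threshold: reject
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# One frame of tracking for a single corner feature.
x = np.array([100.0, 50.0, 30.0, 0.0])     # position + velocity
P = np.eye(4) * 10.0
x, P = predict(x, P)
result = gate_and_update(x, P, np.array([101.2, 50.1]))
print("track kept" if result is not None else "track rejected")
```

Gating with a Mahalanobis distance is one common way to implement the "distance above threshold" test; a plain Euclidean distance would serve the same purpose.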

Feature tracking

Grouping
The central principle: common motion.
- When a sub-feature is detected, it is initially connected to all neighboring tracks within a certain radius.
- For each joined pair of tracks p_a(t), p_b(t), the relative displacement d(t) = p_a(t) - p_b(t) is calculated.
- In every frame, d is evaluated for each edge, and the edge is broken if either
  max_t d_x(t) - min_t d_x(t) > threshold_x   or   max_t d_y(t) - min_t d_y(t) > threshold_y.
- Shadow sub-features tend to be unstable over time, so the grouping eliminates them.
A small sketch of this edge-breaking test is given below.
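A small sketch of the edge-breaking ("common fate") test; the array layout and threshold values are assumptions for illustration.

```python
import numpy as np

def edge_survives(track_a, track_b, thresh_x=2.0, thresh_y=2.0):
    """Decide whether two feature tracks still move rigidly together.

    track_a, track_b: (T, 2) arrays of (x, y) positions over the same T frames.
    The edge between them is broken if their relative displacement varies by
    more than a threshold in either axis.
    """
    d = np.asarray(track_a, float) - np.asarray(track_b, float)  # d(t) = p_a(t) - p_b(t)
    range_x = d[:, 0].max() - d[:, 0].min()
    range_y = d[:, 1].max() - d[:, 1].min()
    return range_x <= thresh_x and range_y <= thresh_y

# Two features with a rigid offset (same vehicle) vs. a drifting pair (different vehicles).
t = np.arange(20)[:, None]
same = np.hstack([t * 1.0, t * 0.0])
offset = same + np.array([5.0, 1.0])
drift = same + np.array([5.0, 1.0]) + np.hstack([t * 0.3, t * 0.0])
print(edge_survives(same, offset))   # True  -> keep edge (group together)
print(edge_survives(same, drift))    # False -> break edge (separate vehicles)
```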

Grouping

Grouping: problems
- Oversegmentation: one vehicle is grouped into several segments
- Overgrouping: several vehicles are grouped into one segment

Real Time: hardware

Real Time: hardware
- PC: Intel Pentium, 150 MHz (160 MFLOPS, 260 MIPS)
- C40: TMS320C40, Texas Instruments floating-point digital signal processor (40 MFLOPS, 20 MIPS)
- Present-day PC: Intel Core i7-920 quad-core processor (63,000 MFLOPS, 76,000 MIPS)

Results
- Length: in min:sec
- G: number of vehicles reported by the algorithm
- N: number of actual vehicles (counted by a human)

Results
- True match: a one-to-one match between a ground truth and a group
- False negative: an unmatched ground truth
- Oversegmentation: a ground truth that is matched by more than one group
- False positive: an unmatched group
- Overgrouping: a group that matches more than one ground truth

Results (figure labels: Start, End)

Results: flow scatter plot

Results: velocity scatter plot

The end