V4 – Video Tracker for Extremely Hard Night Conditions


V4 – Video Tracker for Extremely Hard Night Conditions December 2011 Omer Cohen, Ophir Gozes, Davidov Gavriel

Table of Contents
- The challenge
- Demonstration videos
- Usual solutions
- Our solution
- Basic assumptions for tracking
- Block diagram
- Performance examination
- Conclusion

The Challenge
- Pointing at and tracking a maneuvering target.
- No prior knowledge of the target's shape.
- Dealing with hiding (occluded) objects – position estimation.
- Dealing with very low SNR videos.
- Displaying the velocity and acceleration of the tracked target.

Demonstration Videos – FLIR Human FLIR video of a human crawling at night. Very low SNR. Watch video

Demonstration Videos – Maneuvering Super-Hornet Night video of a maneuvering Super-Hornet. Very low SNR. Watch video

Demonstration Videos – Severely Maneuvering Particle A severely maneuvering particle: high speed, accelerating, a hiding object, low SNR. Watch video

Combined Scenarios Combining all 3 scenarios into one movie, one after another. The algorithm must cope with the scenario changes and identify the target automatically.

Usual Solutions
- Center of mass / image processing – finding the maximal value in the frame. This fails when working with low-SNR video is required; pre-filtering is needed first.
- Correlation with a previously known object – limited to dealing with certain known objects.

Our Solution Combines two object-tracking methods, each with its own advantages:
- Correlation of 2–4 consecutive frames, based on "Quantitative Comparison of Algorithms for Tracking Single Fluorescent Particles" (Cheezum and Walker, 2001). Ideal for very low SNR videos; limited to objects at medium velocity.
- Image-processing peak search. Deals with high velocities at higher SNR.

Correlating Following Frames We based our solution on a method of correlating consecutive frames, following Cheezum and Walker (2001; see Bibliography). [Diagram: frame x correlated with frame x+1]

Correlating Following Frames (2) We implemented a 4-frame correlator: better sensitivity (ADD THE CALCULATION), but less suited to tracking accelerating targets. [Diagram: pairwise correlations of frames x through x+3]
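The pairwise-correlation idea behind the 2- and 4-frame correlators can be sketched as follows. This is a minimal numpy sketch, not the original implementation; in particular, the rule for combining the three pairwise matrices is an assumption here (summing them, which reinforces the peak of a roughly constant-velocity target while noise averages out):

```python
import numpy as np

def pair_corr(a, b):
    # Circular cross-correlation of two frames via FFT; after fftshift the
    # peak sits at (matrix centre + displacement of b relative to a).
    c = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    return np.fft.fftshift(c)

def four_frame_corr(frames):
    # Combine the 3 pairwise correlation matrices of 4 consecutive frames.
    # A constant-velocity target peaks at the same offset in every pair,
    # so summing reinforces the true peak.
    return sum(pair_corr(frames[i], frames[i + 1]) for i in range(3))

rng = np.random.default_rng(0)
n = 32
frames = []
for k in range(4):
    f = rng.normal(0.0, 0.3, (n, n))     # background noise
    f[8 + 2 * k, 8 + 3 * k] += 10.0      # point target moving (2, 3) px/frame
    frames.append(f)

c = four_frame_corr(frames)
dy, dx = np.unravel_index(np.argmax(c), c.shape)
print(dy - n // 2, dx - n // 2)          # prints "2 3": per-frame velocity
```

Note that the peak offset recovers the target's velocity, not its position, which is exactly the limitation the tracking controller works around.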

4-Frame Correlator Demonstration Example: a rectangle in a noisy frame, very low SNR; the object is hardly noticeable. The 2-frame correlation matrix, before and after filtering: the peak cannot be detected. The 4-frame correlation matrix, before and after filtering: the peak is clearly detected.

The Correlation Matrix From the correlation matrix one can calculate the object's velocity along each axis. However, the correlation matrix does not give the object's location. [Diagram: correlation of frames x and x+1; the offset of the maximum-correlation point from the center of the frame gives the x and y velocities]
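The velocity read-out described above is just the peak's offset from the matrix centre; a toy sketch (the correlation matrix here is hypothetical, a single hand-placed peak):

```python
import numpy as np

# Hypothetical 2-frame correlation matrix with one peak: its offset from the
# matrix centre is the per-frame velocity; the object position stays unknown.
n = 64
corr = np.zeros((n, n))
corr[n // 2 + 1, n // 2 - 3] = 1.0   # peak 1 px down, 3 px left of centre

peak = np.unravel_index(np.argmax(corr), corr.shape)
vy, vx = peak[0] - n // 2, peak[1] - n // 2
print(vy, vx)   # prints "1 -3"
```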

Tracking Controller – Correlation To determine the object's position by the correlation method, we need to produce some spatial information about the object:
- Divide the frame into 4 sub-frames.
- Correlate each of the 4 sub-frames with its counterpart in the following frame.
- Choose the "best" correlation matrix.
- Begin a new iteration.
[Diagram: frames x and x+1 split into up/down/left/right sub-frames]
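The quadrant-selection step above can be sketched like this. The selection criterion for the "best" matrix is not specified in the slides, so the height of the correlation peak is used here as a plausible stand-in:

```python
import numpy as np

def best_quadrant(prev, cur):
    # Split both frames into 4 sub-frames, correlate matching quadrants,
    # and return the index of the quadrant with the strongest correlation
    # peak (0 = up-left, 1 = up-right, 2 = down-left, 3 = down-right).
    h, w = prev.shape
    quads = [(slice(0, h // 2), slice(0, w // 2)),
             (slice(0, h // 2), slice(w // 2, w)),
             (slice(h // 2, h), slice(0, w // 2)),
             (slice(h // 2, h), slice(w // 2, w))]
    scores = []
    for sy, sx in quads:
        a, b = prev[sy, sx], cur[sy, sx]
        c = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
        scores.append(c.max())
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
prev = rng.normal(0.0, 0.3, (64, 64))
cur = rng.normal(0.0, 0.3, (64, 64))
prev[40:44, 10:14] += 5.0     # target sits in the down-left quadrant
cur[42:46, 12:16] += 5.0      # ...and moves (2, 2) px between frames
print(best_quadrant(prev, cur))   # prints "2" (down-left)
```

Recursing on the winning quadrant narrows the object's position each iteration, which supplies the spatial information the plain correlation matrix lacks.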

Image Processing Because the frames are noisy, we perform simple image processing: filtering with a low-pass filter (convolution with an n×n square kernel). Then we can clearly find the maximum values.
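A minimal sketch of this pre-filtering step, using an FFT-based circular convolution with an n×n box kernel followed by a peak search (the kernel size and the synthetic frame are illustrative):

```python
import numpy as np

def box_lowpass(img, n=5):
    # Low-pass filter: circular convolution with an n x n square kernel,
    # centred with np.roll so each output pixel is the local n x n average.
    kernel = np.zeros_like(img)
    kernel[:n, :n] = 1.0 / (n * n)
    kernel = np.roll(kernel, (-(n // 2), -(n // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

rng = np.random.default_rng(2)
frame = rng.normal(0.0, 1.0, (64, 64))
frame[30:34, 20:24] += 3.0            # dim 4x4 target buried in noise

smooth = box_lowpass(frame, 5)
y, x = np.unravel_index(np.argmax(smooth), smooth.shape)
print(y, x)                           # peak lands near the target centre
```

On the raw frame the global maximum is often a noise spike; after averaging, single-pixel noise is suppressed while the extended target survives, so the argmax becomes a usable detection.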

Tracking Controller The purpose is to combine our tracking methods (correlation of previous frames & center of mass) into one efficient algorithm that brings out each method's advantages. For each new frame:
- If this is the first frame in the movie, or certainty(last frame) < threshold, or velocity(last frame) > threshold, determine the object's position by the IP method; otherwise determine it from the correlation result.
- Decide whether the object is present in the correlation frame; if certainty > threshold, output the object's estimated position.
- Pass the estimate through a Kalman filter.
- Correlate using the new frame.
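The controller's branching logic above reduces to a small decision function. The threshold names and values below are illustrative only; the slides give the branching structure, not the numbers:

```python
# Illustrative thresholds -- the slides do not specify the actual values.
CERTAINTY_THR = 0.5
VELOCITY_THR = 0.7        # in object widths per frame

def choose_method(first_frame, last_certainty, last_speed):
    # Pick the tracking method for the new frame: fall back to the
    # image-processing peak search whenever the correlation track is
    # unreliable (start-up, low certainty, or too-fast motion).
    if first_frame or last_certainty < CERTAINTY_THR or last_speed > VELOCITY_THR:
        return "image_processing"
    return "correlation"

print(choose_method(True, 1.0, 0.0))    # prints "image_processing"
print(choose_method(False, 0.9, 0.2))   # prints "correlation"
```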

Basic Assumptions for Tracking

Parameter                          | Correlation algorithm thr.              | IP algorithm thr.
Velocity                           | 0.5 × object's width (pixels) per frame | ---
SNR (for Wobject ~ 0.05 × Wframe)  | 0.3                                     | 1
Object's shape                     | wObject < wFrame                        | fairly symmetric

Dealing With Hiding Targets To deal with hiding targets, we must be able to filter out frames in which the object is absent. We use a 4-dimensional Kalman filter over (Xx, Xy, Vx, Vy) to filter the samples. The certainty generated by the correlation algorithm is used as the sample noise. Watch video
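A minimal constant-velocity Kalman filter over (x, y, vx, vy) sketches this occlusion handling. The slides say the correlation certainty is used as the sample noise; one plausible mapping (an assumption here) is to scale the measurement-noise covariance R inversely with certainty, so a low-certainty (hidden) frame barely moves the estimate and the filter coasts on its prediction:

```python
import numpy as np

F = np.eye(4); F[0, 2] = F[1, 3] = 1.0        # constant-velocity model, dt = 1
H = np.array([[1., 0., 0., 0.],
              [0., 1., 0., 0.]])              # only position is measured
Q = 0.01 * np.eye(4)                          # process noise

x = np.array([0., 0., 1., 0.5])               # initial state (pos, vel)
P = np.eye(4)

def step(x, P, z, certainty):
    # One predict/update cycle; low certainty -> large R -> weak update.
    R = (1.0 / max(certainty, 1e-3)) * np.eye(2)
    x = F @ x                                  # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                        # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Target moves (1, 0.5) px/frame; it is hidden during frames 8-12.
for t in range(1, 21):
    if 8 <= t <= 12:                           # hidden: predict only
        x, P = F @ x, F @ P @ F.T + Q
    else:
        x, P = step(x, P, np.array([t, 0.5 * t]), certainty=1.0)

print(np.round(x[:2]).astype(int))             # prints "[20 10]": true position
```

During the hidden interval the position estimate keeps advancing along the last estimated velocity, which is exactly the position-estimation behaviour the slide asks for.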

Performance Examination Algorithm result for combined videos: Watch video. Algorithm result for combined videos (2). Algorithm result for hiding objects.

Low-SNR Scenario at Low Speed Correlation is superior; the image-processing method's false detections are evident.

SNR = 1 at High Speed Image processing is superior; correlation lock is lost at velocity > 0.7 × square width.

Block Diagram
- Main function mainusemaincontrolv4f: parameter initialization, output video creation.
- MainControl
- SearchParams: determines the next correlation window.
- Multi-frame correlator mfmulticorrelatorv4: performs 5-window, 4-frame correlation.
- PlotResults: embeds detection markers on the output video.

Algorithm's Weaknesses The method of correlating previous frames is limited to low-to-medium velocities (computational complexity). At higher velocities, the IP algorithm is limited by its SNR threshold. Because of the complexity of the correlation method, processing time may become critical.

Looking Forward Improving the correlation-of-previous-frames method to reduce calculation time. Implementation in hardware to achieve real-time performance.

Bibliography M. K. Cheezum, W. F. Walker, and W. H. Guilford, "Quantitative Comparison of Algorithms for Tracking Single Fluorescent Particles," Department of Biomedical Engineering, University of Virginia, Charlottesville, VA 22908, USA, 2001.