Detection, Tracking and Recognition in Video Sequences. Supervised by: Dr. Ofer Hadar, Mr. Uri Perets. Project by: Sonia KanOra Gendler. Ben-Gurion University of the Negev, Department of Communication Systems Engineering.


Outline
- Motivation
- Our System
- Phase 1: Detection
- Phase 2: Tracking
- Phase 3: Recognition
- Result Video

Motivation: National security is a matter of high priority. The main method of security today is cameras; their disadvantage is the manpower they require. A better method is needed.

Our System: An automatic security system using a stationary camera. Goal: detect, track and recognize a moving human object in a video sequence. Each phase is implemented as a separate algorithm; the simulation runs in MATLAB.

Hall Monitor

A General View on the System (state-machine diagram): Detect, Track, Recognize; after each tracking step, the branch "Tracking successful?" (Yes/No) decides whether to continue or return to detection.

Object Detection: Use the Canny edge detector to find all edge maps. Four images are used: the background edge map Eb, the difference edge map DEn, the current edge map En, and the previous algorithm output MEn-1. (Block diagram: the current and previous gray-level images feed "find edge map of difference image", producing DEn; together with Eb and MEn-1 this is used to find the moving-object edges MEn for frame n.)
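As an illustration of the difference-edge step, here is a minimal pure-Python sketch. The project uses the Canny detector; this toy replaces it with a simple threshold on the frame difference followed by a crude boundary test, and `thresh` is an assumed parameter, not a project value.

```python
def diff_edge_map(prev, curr, thresh=30):
    """Toy stand-in for the difference edge map DEn: threshold the
    absolute frame difference, then keep changed pixels whose
    4-neighborhood contains an unchanged pixel (a crude edge of the
    changed region)."""
    h, w = len(curr), len(curr[0])
    moving = [[abs(curr[y][x] - prev[y][x]) > thresh for x in range(w)]
              for y in range(h)]
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not moving[y][x]:
                continue
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not moving[ny][nx]:
                    edges[y][x] = True
                    break
    return edges
```

For a static background and a 3x3 moving patch, only the patch's boundary pixels survive, which mimics how the difference image yields edges of the moving parts.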

Object Detection (Cont.): background frame and the extracted background edge map (figures).

Object Detection (Cont.): The difference image contains the moving parts of the frame; its Canny edge map gives DEn.

Object Detection (Cont.): current frame, edges extracted from the difference image, and edges extracted from the original frame (figures).

Object Detection (Cont.): From the current edge map En, find the pixels that belong to the moving parts of the object and the pixels that belong to the still parts, then combine the two components.

Detection Result: final edge map (figure).

Object Tracking: Input: the current and previous edge maps. Goal: estimate the object's location in the next frame. The estimation is repeated over several frames until tracking fails; in that case, the object is re-detected.

Object Tracking (Cont.): Divide the previous edge map into square blocks and use thresholding to determine which blocks contain parts of objects (figure: an empty block vs. a block containing an object).
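The block partition and occupancy threshold can be sketched as follows; the block size and `min_pixels` threshold are illustrative values, not the project's actual parameters.

```python
def object_blocks(edge_map, block=4, min_pixels=3):
    """Split a binary edge map into square blocks and return the
    top-left coordinates of the blocks whose edge-pixel count
    reaches the threshold."""
    h, w = len(edge_map), len(edge_map[0])
    kept = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            count = sum(edge_map[y][x]
                        for y in range(by, min(by + block, h))
                        for x in range(bx, min(bx + block, w)))
            if count >= min_pixels:
                kept.append((by, bx))
    return kept
```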

Object Tracking (Cont.): Locate the blocks containing objects from the previous edge map in the current edge map; the blocks are matched using correlation (figure: previous block x current region = matched location).
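The match itself can be sketched as an exhaustive correlation search. The project may use normalized or frequency-domain correlation; this toy scores each candidate placement with a plain dot product over binary maps.

```python
def match_block(block, frame):
    """Slide a small binary block over a binary frame and return the
    (row, col) placement with the highest dot-product score."""
    bh, bw = len(block), len(block[0])
    fh, fw = len(frame), len(frame[0])
    best, best_pos = -1, None
    for y in range(fh - bh + 1):
        for x in range(fw - bw + 1):
            score = sum(block[i][j] * frame[y + i][x + j]
                        for i in range(bh) for j in range(bw))
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

In practice the search would be restricted to a small window around the block's previous position rather than the whole frame.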

Correlation Statistics (chart).

Matching Results: previous edge map, previous edge map with blocks, and current edge map with matched blocks (figures).

Object Tracking (Cont.): For each matched block, calculate the number of pixels it moved along each axis. Calculate the average E and the standard deviation of these displacements, and divide them into five speed ranges.
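A sketch of the displacement statistics: the slides say only that the mean E and the standard deviation define five ranges, so the half-sigma and one-and-a-half-sigma boundaries below are assumptions for illustration.

```python
from statistics import mean, stdev

def classify_speeds(displacements):
    """Assign each per-block displacement to one of five ranges
    around the mean E, with boundaries at E +/- 0.5s and E +/- 1.5s
    (s = sample standard deviation); the boundary choice is assumed."""
    E, s = mean(displacements), stdev(displacements)
    bounds = [E - 1.5 * s, E - 0.5 * s, E + 0.5 * s, E + 1.5 * s]
    def bucket(d):
        for i, b in enumerate(bounds):
            if d < b:
                return i
        return 4
    return [bucket(d) for d in displacements]
```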

Object Tracking (Cont.): Estimate the location of each block in the next edge map according to its speed range, then verify the true location of each block using correlation.

Tracking Statistics: threshold (chart).

Tracking Results: current, next and following frames, with double estimation (figures).

Scenario Analysis (diagram): Environment (static / dynamic), Actors (size, motion, interaction), Equipment (static / dynamic).

Recognition
- Monitor the number of objects.
- Recognize the behavior of a human object:
  - Size (based on detected edges): size indicates motion inwards or outwards.
  - Motion (based on tracked blocks): sudden changes in speed are recognized as suspicious.
  - Interaction with the environment (based on the skeleton): suspicious postures are recognized.

Monitoring the Number of Objects (figure).

Skeleton Construction: Locate the center of mass; find the end points of the limbs; mark the spine and limb locations; calculate the angles between the spine and the limbs to recognize the posture.
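The spine-limb angle can be computed from the skeleton points with a dot product; the point names and 2-D coordinates below are illustrative, not the project's actual data structures.

```python
import math

def limb_angle(spine_top, center, limb_end):
    """Angle in degrees at the center of mass between the spine
    direction (center -> spine_top) and a limb direction
    (center -> limb_end)."""
    ax, ay = spine_top[0] - center[0], spine_top[1] - center[1]
    bx, by = limb_end[0] - center[0], limb_end[1] - center[1]
    cos_t = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```

A raised arm perpendicular to the spine gives 90 degrees; an arm hanging along the body gives close to 180, so thresholds on these angles can flag postures.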

Separated Objects Demo

Thank You!

Freq. Domain Correlation: If there is a match between an image f and an object h, the correlation will be maximal at the location of h in f. By the correlation theorem, the correlation is given by f(x,y) ∘ h(x,y) ⇔ F*(u,v) H(u,v), where F*(u,v) denotes the complex conjugate of the Fourier transform of f.
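A 1-D toy of the correlation theorem (the slides apply it in 2-D): compute the cross-correlation as the inverse DFT of a conjugate product and check that the peak sits at the object's location. Which factor is conjugated only flips the sign convention of the recovered shift; here H is conjugated so the peak index is the offset of h within f.

```python
import cmath

def dft(x, inverse=False):
    """Naive O(n^2) discrete Fourier transform, stdlib only."""
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * i * k / n)
               for k in range(n)) for i in range(n)]
    return [v / n for v in out] if inverse else out

def correlate_freq(f, h):
    """Circular cross-correlation via the correlation theorem:
    c = IDFT(F * conj(H)); its peak index is where h matches in f."""
    F, H = dft(f), dft(h)
    prod = [Fi * Hi.conjugate() for Fi, Hi in zip(F, H)]
    return [c.real for c in dft(prod, inverse=True)]
```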

Mathematical Background:
- Difference edge map: DEn is the Canny edge map of the absolute difference between the current and previous gray-level images.
- Pixels that belong to the moving parts of the object: current-edge pixels of En that lie close to DEn.
- Pixels that belong to the still parts of the object: current-edge pixels of En that lie close to the previous output MEn-1.
- Result: MEn is the union of the moving and still edge sets.
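The set combination can be sketched directly on pixel coordinates. The proximity rule below (an edge pixel counts as moving or still if a reference edge lies within `radius`) is an assumption for illustration; the slides only name the sets.

```python
def combine_edges(E_n, DE_n, ME_prev, radius=1):
    """Sketch of MEn = moving ∪ still: a current-edge pixel is
    'moving' if a difference-edge pixel is within `radius`, and
    'still' if a previous moving-edge pixel is."""
    def near(p, pts):
        return any(abs(p[0] - q[0]) <= radius and abs(p[1] - q[1]) <= radius
                   for q in pts)
    moving = {p for p in E_n if near(p, DE_n)}
    still = {p for p in E_n if near(p, ME_prev)}
    return moving | still
```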