Tracking Pedestrians Using Local Spatio-Temporal Motion Patterns in Extremely Crowded Scenes. Louis Kratz and Ko Nishino, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012.

Presentation transcript:

Tracking Pedestrians Using Local Spatio-Temporal Motion Patterns in Extremely Crowded Scenes. Louis Kratz and Ko Nishino, IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2012

Outline: Motivation, Introduction, Proposed method, Experimental results, Conclusion

Motivation. Goal: track single or multiple pedestrians in crowded scenes while solving conventional tracking problems: -Occlusion -Pedestrians moving in different directions -Appearance change

Introduction (1): an observed phenomenon.

Observation: small areas of instantaneous motion tend to repeat, both temporally and spatially.

Introduction (2). Spatio-temporal motion patterns: -Describe crowd motion -Build a spatial and temporal statistical model -Use the model to predict the movement of individuals

Spatio-temporal motion pattern (figure: the video volume divided into cuboids, with axes x, y, and t)

3D gradient vectors: calculate the mean motion vector, or build a statistical model of the gradients, within each cuboid.

Introduction (3). Hidden Markov Model: -States are not directly visible -Composed of three components: (1) observation probabilities, (2) transition probabilities, (3) initial probabilities

Introduction (4). Posterior distribution: given the evidence X, find the probability of the parameters.

Introduction (5). Particle filter: a filter that can be used to predict the next state. -Different from the Kalman filter: robust to nonlinear systems and can handle non-Gaussian noise -Measurement: (formula shown on slide)
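As background, a minimal bootstrap particle filter sketch is shown below. It is illustrative only; the transition and likelihood models actually used by the paper are introduced in the later slides, and the function names here are assumptions.

```python
# Minimal bootstrap particle filter sketch (illustrative; transition_fn and
# likelihood_fn are placeholders for the models defined later).
import numpy as np

def particle_filter_step(particles, weights, transition_fn, likelihood_fn, observation):
    """One predict/update/resample cycle over a set of state hypotheses."""
    # Predict: propagate each particle through the (possibly nonlinear) dynamics.
    particles = np.array([transition_fn(p) for p in particles])
    # Update: weight each particle by how well it explains the new observation.
    weights = weights * np.array([likelihood_fn(observation, p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Resample: draw particles in proportion to their weights
    # (this is what lets the filter represent non-Gaussian posteriors).
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```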

Proposed method

Flow chart

(a) Divide the training video into spatio-temporal cuboids, calculate motion vectors, and build a statistical model for each motion pattern. (b) Train a collection of hidden Markov models. (c) Use observed local motion patterns to predict the motion pattern at each location. (d) Use the predicted motion patterns to track individuals.

Step (a): statistical model for motion patterns. 1. First, calculate the motion vector at each pixel from the 3D gradient (see the sketch after this list). 2. Next, build a statistical model using a 3D Gaussian distribution over these gradients.

3. Define the local spatio-temporal motion pattern at location n and frame t.
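A minimal sketch of steps 1-3, assuming a grayscale video volume of shape (T, H, W); the cuboid size and variable names are illustrative, not taken from the paper.

```python
# Sketch of step (a): fit a 3D Gaussian to the spatio-temporal gradients
# inside one cuboid of the video volume.
import numpy as np

def cuboid_motion_pattern(video, t0, y0, x0, size=10):
    """Return the mean and covariance of the 3D gradients in one cuboid."""
    cub = video[t0:t0 + size, y0:y0 + size, x0:x0 + size].astype(float)
    # 3D gradient at every pixel: (dI/dt, dI/dy, dI/dx).
    gt, gy, gx = np.gradient(cub)
    grads = np.stack([gt.ravel(), gy.ravel(), gx.ravel()], axis=1)
    # Statistical model of the local motion pattern: a 3D Gaussian.
    mu = grads.mean(axis=0)
    sigma = np.cov(grads, rowvar=False)
    return mu, sigma
```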

Step (b): train hidden Markov models. 1. Using a clustering algorithm, divide the motion patterns into S clusters. 2. Define hidden states {s = 1, ..., S}, where S is the number of clusters. 3. For a specific hidden state s, the probability of an observed motion pattern is computed from a distance (divergence) between the observed and state distributions.
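The slide only states that a distance between the two Gaussian distributions is used, so the symmetrized KL divergence in the sketch below is an illustrative choice, not necessarily the paper's exact formula; `obs` and `state` are assumed to be (mean, covariance) pairs from step (a).

```python
# Hedged sketch of an emission probability P(observed pattern | hidden state s).
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL divergence between two Gaussians N(mu0, S0) and N(mu1, S1)."""
    S1_inv = np.linalg.inv(S1)
    d = mu1 - mu0
    k = len(mu0)
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def emission_prob(obs, state):
    """Map a symmetric distribution distance to a probability-like score."""
    dist = kl_gaussian(obs[0], obs[1], state[0], state[1]) \
         + kl_gaussian(state[0], state[1], obs[0], obs[1])
    return np.exp(-dist)
```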

Step (c): predict motion patterns. Take the expected value of the predictive distribution, computed with the forward-backward algorithm. Reference: [23] L. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, Feb. 1989.
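A minimal sketch of the forward pass and one-step prediction, following Rabiner [23]; the transition matrix `A`, initial distribution `pi`, and per-state emission function are assumptions layered on the emission sketch above.

```python
# Forward recursion over observed motion patterns, then one-step prediction.
import numpy as np

def predict_next_state_distribution(obs_seq, A, pi, emission_prob, states):
    """Return the predictive distribution over hidden states for the next frame."""
    alpha = pi * np.array([emission_prob(obs_seq[0], s) for s in states])
    alpha /= alpha.sum()
    for obs in obs_seq[1:]:
        alpha = (A.T @ alpha) * np.array([emission_prob(obs, s) for s in states])
        alpha /= alpha.sum()
    return A.T @ alpha  # distribution over the next hidden state

def expected_motion_pattern(pred, states):
    """Expected value of the predictive distribution: weighted mean of state means."""
    return sum(p * s[0] for p, s in zip(pred, states))
```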

Step (d): trace individuals. Use a particle filter to maximize the posterior distribution, decomposed as: posterior ∝ likelihood × prior P(x_f).

x_{f-1} = [x, y, w, h]^T is the state in frame f-1. The figure shows how the state vector x_{f-1} defines a target window at frame f-1.

Past and current measurements: z_f is the frame (image) at time f.

Priors: we use the motion pattern at the center of the tracked target to estimate the prior on the distribution of the next state x_f.

Transition distribution. P(x_f | x_{f-1}) is the transition distribution, modeled as a normal distribution centered on the previous state shifted by the predicted motion, where v is the 2D optical flow vector derived from the predicted motion pattern [27] and Σ is the covariance matrix of the predicted motion pattern distribution. Reference: [27] J. Wright and R. Pless, "Analysis of Persistent Motion Patterns Using the 3D Structure Tensor," Proc. IEEE Workshop Motion and Video Computing, 2005.
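A sketch of sampling this transition, assuming only the position part of the state [x, y, w, h] is perturbed by the predicted flow while the window size stays fixed (that split is an assumption for illustration).

```python
# Sample candidate states around the flow-predicted position:
# x_f ~ N(x_{f-1} + v, Sigma) on the (x, y) components.
import numpy as np

def sample_transition(x_prev, flow_xy, cov_xy, n_particles=100):
    """x_prev: [x, y, w, h]; flow_xy, cov_xy from the predicted motion pattern."""
    xy = np.random.multivariate_normal(x_prev[:2] + flow_xy, cov_xy, size=n_particles)
    wh = np.tile(x_prev[2:], (n_particles, 1))  # keep window size unchanged
    return np.hstack([xy, wh])  # each row is a candidate state [x, y, w, h]
```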

Likelihood distribution. T: template of the tracked person; R: image region of the bounding box at frame f; Z: a normalizing constant; σ²: variance with respect to appearance change.

Define the distance measure between template and region: t_i is the template gradient vector, r_i the region gradient vector, and M the number of pixels in the template. A large distance gives a small likelihood; a small distance gives a large likelihood.
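An illustrative sketch of this idea: an average gradient distance mapped to a probability with an exponential. The exact distance used in the paper is not reproduced here; the code only mirrors "large distance means small likelihood".

```python
# Likelihood from the distance between template and candidate-region gradients.
import numpy as np

def likelihood(template_grads, region_grads, sigma=1.0):
    """template_grads, region_grads: (M, 2) arrays of per-pixel gradient vectors."""
    d = np.linalg.norm(template_grads - region_grads, axis=1).mean()
    return np.exp(-d / (2.0 * sigma ** 2))  # larger distance -> smaller likelihood
```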

Add per-pixel weights to adapt to appearance change. The error E_i accounts for appearance change: pixels in occluded regions have a large angle between t_i and r_i, so E_i is large; when E_i is large, the weight becomes small.
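A hedged sketch of such a weighting: the angle between template and region gradients is used as the error E_i, and larger errors get smaller weights. The specific weight function exp(-E_i) is an illustrative choice, not necessarily the paper's.

```python
# Per-pixel weights that down-weight likely occluded pixels.
import numpy as np

def pixel_weights(template_grads, region_grads, eps=1e-8):
    """Weights from the angle between corresponding gradient vectors."""
    t = template_grads / (np.linalg.norm(template_grads, axis=1, keepdims=True) + eps)
    r = region_grads / (np.linalg.norm(region_grads, axis=1, keepdims=True) + eps)
    angle_error = np.arccos(np.clip((t * r).sum(axis=1), -1.0, 1.0))  # E_i in [0, pi]
    return np.exp(-angle_error)  # large E_i (likely occlusion) -> small weight
```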

Experimental results. Implementation: -Intel Xeon processor -10 frames per second -cuboid size 10 x 10 x 10

Datasets

From the UCF Crowd data set, with 300, 350, 300, and 120 frames respectively: (a) a train station concourse, (b) a ticket gate, (c) a sidewalk, (d) an intersection.

Experiment 1. White indicates high error. Errors occur in areas with little texture or heavy noise; the intersection scene has higher error due to the small amount of training data.

Experiment 2

When occlusion is severe, the variance of the likelihood increases (at frames 56, 112, and 201).

Experiment 3

Experiment 4. Errors are caused because the initial (trained) states do not contain this direction of motion.

Experiment 5

Experiment 6

Conclusion. We proposed an efficient method for tracking individuals in crowded scenes. It addresses errors caused by occlusion, appearance change, and movement in different directions.