1
Object detection, tracking and event recognition: the ETISEO experience
Andrea Cavallaro
Multimedia and Vision Lab, Queen Mary, University of London
andrea.cavallaro@elec.qmul.ac.uk
2
Outline
- QMUL's object tracking and event recognition
  - Change detection and object tracking
  - Event recognition
- ETISEO
  - Evaluation: protocol, data, ground truth
  - Impact
- Improvements of future evaluation campaigns
- Conclusions
- … and an advert
3
Outline
- QMUL's object tracking and event recognition
  - Change detection and object tracking
  - Event recognition
- ETISEO
  - Evaluation: protocol, data, ground truth
  - Impact
- Improvements of future evaluation campaigns
- Conclusions
- … and an advert
4
Prior system for event detection: RATP / CREDS
http://www.elec.qmul.ac.uk/staffinfo/andrea/CREDS-help.html
5
Introduction
QMUL Detection, Tracking, Event Recognition (Q-DTE)
- Initially designed for event detection and tracking in metro stations
- Modified to respond to ETISEO
Components:
- Moving object detection: background subtraction with noise modelling
- Object tracking: graph matching with a composite target distance based on multiple object features
- Event recognition
M. Taj, E. Maggio, A. Cavallaro, "Multi-feature graph-based object tracking," Proc. of CLEAR Workshop, LNCS 4122, 2006
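A schematic of how the three components connect, as a hypothetical Python sketch; the class and method names are illustrative, not from the Q-DTE code:

```python
class QDTEPipeline:
    """Hypothetical three-stage pipeline mirroring the Q-DTE components."""

    def __init__(self, detector, tracker, recognizer):
        self.detector = detector      # background subtraction + noise model
        self.tracker = tracker        # multi-feature graph matching
        self.recognizer = recognizer  # event recognition on the tracks

    def process(self, frame):
        detections = self.detector.detect(frame)   # moving object detection
        tracks = self.tracker.update(detections)   # data association
        events = self.recognizer.infer(tracks)     # event recognition
        return detections, tracks, events
```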
6
Object detection and tracking
Change detection:
- Statistical change detection with Gaussians on colour components
- Noise filtering
- Contrast enhancement
Problem: data association after object detection
- Appearance/disappearance of objects
- False detections due to clutter and noisy observations
7
Moving object segmentation
Motion detection through frame difference.
Problem: the difference frame $D = \{d_k\}$ has $d_k \neq 0$ even where there is no structural change at pixel $k$ (camera noise).
[Figure: current frame, reference frame and difference frame $D$]
8
Adaptive threshold for change detection
- Noise modelling
- Test statistic
- Significance test
Hypothesis $H_0$: "no change at pixel $k$"; camera noise $\sim \mathcal{N}(0, \sigma^2)$
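A minimal Python sketch of such a significance test, assuming greyscale frames (the system applies Gaussians per colour component, so in practice the test would run per channel). The noise variance (1.8) and the 3x3 kernel are the values reported on the experimental-framework slide; the function name and significance level are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import norm

def change_mask(current, reference, noise_var=1.8, alpha=0.01, kernel=3):
    """Per-pixel significance test under H0: no structural change at
    pixel k, i.e. the frame difference is camera noise ~ N(0, noise_var).

    The test statistic is the sum of differences over a kernel x kernel
    window; under H0 it is N(0, n * noise_var) with n = kernel**2.
    """
    d = current.astype(np.float64) - reference.astype(np.float64)
    n = kernel * kernel
    local_sum = uniform_filter(d, size=kernel) * n   # windowed sum of d_k
    # Two-sided test at significance level alpha
    threshold = norm.ppf(1.0 - alpha / 2.0) * np.sqrt(n * noise_var)
    return np.abs(local_sum) > threshold
```

Pixels where the local sum of differences is too unlikely under $H_0$ are marked as changed, so the threshold adapts to the modelled noise level rather than being fixed by hand.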
9
Tracking
- Graph matching using weighted features
- Data association verified over several frames to validate the correctness of the tracks
- Supports track recovery in occlusion scenarios
Features: centre of mass (position), velocity, bounding box (size), colour (appearance)
10
Graph matching: full graph
[Figure: full graph over three frames, with vertex sets V_1, V_2, V_3 and vertices v(x_i^t) for the candidate objects x_i^t detected in frame t]
11
Graph matching: max path cover
[Figure: maximum path cover of the same graph, linking vertices v(x_i^t) across V_1, V_2, V_3 into object tracks]
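The system solves a weighted maximum path cover over several frames; as a simpler two-frame illustration, the sketch below combines per-feature distances with the weights from the next slide and solves the resulting assignment with the Hungarian algorithm. The function names, dict-based feature representation and gating threshold are assumptions, not the authors' implementation; per-feature normalisation is omitted for brevity.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Feature weights (position, velocity, appearance, size) from the
# experimental-framework slide; each per-feature distance is assumed
# to be normalised to a comparable range.
WEIGHTS = {"position": 0.40, "velocity": 0.30,
           "appearance": 0.15, "size": 0.15}

def composite_distance(track, detection):
    """Weighted sum of per-feature distances between a track's last
    state and a candidate detection (both dicts: feature -> vector)."""
    return sum(w * np.linalg.norm(np.asarray(track[f]) -
                                  np.asarray(detection[f]))
               for f, w in WEIGHTS.items())

def associate(tracks, detections, max_cost=1.0):
    """Two-frame association with the Hungarian algorithm.
    Unmatched tracks/detections account for object disappearance
    and appearance; the gate rejects improbable matches."""
    cost = np.array([[composite_distance(t, d) for d in detections]
                     for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_cost]
```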
12
Experimental framework
Key parameters:
- noise variance: 1.8
- kernel size: 3x3
Feature weights:
- position α = 0.40
- velocity β = 0.30
- appearance γ = 0.15
- size δ = 0.15
Determined using the CLEAR dataset/metrics:
- Moving object detection accuracy / precision (MODA / MODP)
- Moving object tracking accuracy / precision (MOTA / MOTP)
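For reference, the CLEAR tracking measures cited here are commonly defined as

$$\mathrm{MOTA} = 1 - \frac{\sum_t \left( fn_t + fp_t + id_t \right)}{\sum_t g_t}, \qquad \mathrm{MOTP} = \frac{\sum_{i,t} d_{i,t}}{\sum_t c_t},$$

where $fn_t$, $fp_t$ and $id_t$ are the misses, false positives and identity switches in frame $t$, $g_t$ is the number of ground-truth objects, $d_{i,t}$ the distance between the $i$-th matched object-hypothesis pair, and $c_t$ the number of matches in frame $t$. MODA/MODP are the detection-level analogues, computed without track identities.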
13
Event recognition
18
Outline QMUL’s object tracking and event recognition Change detection and object tracking Event recognition ETISEO Evaluation: protocol, data, ground truth Impact Improvements of future evaluation campaigns Conclusions … and an advert
19
ETISEO Impact
- Promote evaluation: formal and objective evaluation is (urgently) needed
- Data collection and distribution: time consuming!
- Common ground for research: priority sequences; use of an existing XML schema
- Discussion forum: the choice of performance measures and experimental data is not obvious
20
Improvements
- Involve stakeholders at earlier stages
- More input from end users: what do they want / need? costs / weights of errors
- Involve (more) researchers from the beginning: facilitate understanding of the protocol; fix errors / ambiguities early
- Use a training/testing dataset (see i-Lids and CLEAR); maybe a private dataset too
- Give meaning to measures: what is the "value" of these numbers? (e.g., compare with a naïve result); what is the "value" of a difference of, say, 0.1?
21
Improvements of future evaluation campaigns: questions
- Are we evaluating too many things simultaneously? Too many variables.
- Do we need so many measures? Remove redundant measures.
- Is the ground truth really "truth"? Statistical analysis / more annotators / confidence levels.
- Should we distribute the evaluation tool / ground truth earlier?
- Are we happy with the current demarcation of regions / definition of events?
- Do we want to evaluate all the event types together? Should we focus on subsets of events and move on progressively?
- Is the dataset too heterogeneous? Can we generalize the results obtained so far?
22
Conclusions
QMUL submission:
- Statistical colour change detection
- Multi-feature weighted graph matching
- Event recognition module: evolution from CREDS 2005; next: extend to 3D
Feedback on ETISEO:
- Evaluation + discussion
- Extend the community / do not duplicate efforts
- Metrics
More information: http://www.elec.qmul.ac.uk/staffinfo/andrea
… and an advert
23
IEEE International Conference on Advanced Video and Signal based Surveillance (IEEE AVSS 2007)
London (UK), 5-7 September 2007
Paper submission: 28 February 2007
24
Acknowledgments
Murtaza Taj, Emilio Maggio
25
Evaluation metric
http://www.elec.qmul.ac.uk/staffinfo/andrea/CREDS-help.html
[Figure: CREDS event-detection score as a function of detection time offset, with labels: Maximum Score, Accepted anticipation, Unaccepted anticipation, Maximum Delay]
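The exact CREDS scoring formula is not reproduced on the slide; the Python function below is only a hypothetical piecewise curve consistent with the four labels in the figure, with placeholder time constants.

```python
def creds_style_score(dt, max_score=1.0, accepted_anticipation=1.0,
                      unaccepted_anticipation=4.0, max_delay=10.0):
    """Hypothetical piecewise scoring curve matching the slide's labels
    (NOT the official CREDS formula). dt is detection time minus
    ground-truth event time, in seconds; dt < 0 means anticipation.
    All time constants are illustrative placeholders."""
    if dt < -unaccepted_anticipation or dt > max_delay:
        return 0.0                       # too early or too late: no score
    if dt < -accepted_anticipation:
        # ramp up between unaccepted and accepted anticipation
        span = unaccepted_anticipation - accepted_anticipation
        return max_score * (dt + unaccepted_anticipation) / span
    if dt <= 0.0:
        return max_score                 # within accepted anticipation
    return max_score * (1.0 - dt / max_delay)   # score decays with delay
```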