1
TRECVID 2004 Probabilistic Approaches to Video Retrieval: The Lowlands Team at TRECVID 2004. Tzvetanka (‘Tzveta’) I. Ianeva, Lioudmila (‘Mila’) Boldareva, Thijs Westerveld, Roberto Cornacchia, Djoerd Hiemstra (the one and only), Arjen P. de Vries
2
Generative Models, aka ‘Language Modelling’ A statistical model for generating data –Probability distribution over samples in a given ‘language’ M –Chain rule over the sample’s terms: P(t1 t2 | M) = P(t1 | M) P(t2 | M, t1) [the terms were shown as word images in the original slide] © Victor Lavrenko, Aug. 2002
3
Basic question: –What is the likelihood that this document is relevant to this query? … in Information Retrieval: P(rel | I, Q) = P(I, Q | rel) P(rel) / P(I, Q), with P(I, Q | rel) = P(Q | I, rel) P(I | rel)
4
Retrieval (Query generation) Models Each document is indexed as a model M_i; the query is scored against every model: P(Q | M1), P(Q | M2), P(Q | M3), P(Q | M4), … [diagram: Query matched against Docs]
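The query-generation view above can be sketched as a unigram query-likelihood scorer. This is a minimal illustration of ranking by P(Q|M) with collection smoothing, not the team's actual system; the function name, the word-list representation, and the smoothing weight lam=0.85 are assumptions:

```python
from collections import Counter
import math

def query_likelihood(query, doc, collection, lam=0.85):
    """log P(Q | M_d): unigram document model, linearly smoothed with
    the collection model so unseen query terms keep non-zero mass."""
    d, c = Counter(doc), Counter(collection)
    score = 0.0
    for t in query:
        p_doc = d[t] / len(doc)
        p_col = c[t] / len(collection)
        score += math.log(lam * p_doc + (1 - lam) * p_col)
    return score
```

Documents are then ranked by this score, which is exactly the P(Q|M1) … P(Q|M4) comparison in the diagram.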
5
‘Language Modelling’ Not just ‘English’ But also, the language of –author (Hiemstra or Robertson?) –newspaper –text document –image ‘Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing.’
6
‘Language Modelling’ Not just ‘English’ But also, the language of –author –newspaper (Guardian or Times?) –text document –image
7
‘Language Modelling’ [image A] or [image B]? Not just English! But also, the language of –author –newspaper –text document –image
8
Application to Video Retrieval Matching against multiple modalities gives robustness –GMM of shot (‘dynamic’) or key-frame (‘static’) –MNM of associated text (ASR) –Assume the scores for the two modalities are independent Merge multiple examples’ results in round-robin (RR) fashion Interactive search is much more successful than manual search: the role of the user is very important
9
TRECVID 2004: Research Questions Pursued Modelling video content: –How to best model the visual content? –How to best model the textual content? Does audio-visual content modelling contribute to better retrieval results? –Both in manual and interactive search? How to translate the topic into a query?
10
Experimental Set-up Build models for each shot –Static, Dynamic, Language Build queries from topics –Automatic as well as manually constructed simple keyword text queries –Select visual example
11
Modelling Visual Content
12
Static Model [diagram: Docs → Models] Indexing –Estimate Gaussian Mixture Models from images using EM –Based on feature vectors with colour, texture, and position information from pixel blocks –Fixed number of components
13
Static Model Indexing –Estimate a Gaussian Mixture Model from each keyframe (using EM) –Fixed number of components (C=8) –Feature vectors contain colour, texture, and position information from pixel blocks
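The indexing recipe above (an EM-fitted Gaussian mixture over per-block feature vectors) can be sketched in plain Python. This is a toy diagonal-covariance EM for illustration only, not the team's Matlab/MonetDB implementation; the function name, the iteration count, and the regularisation constants are assumptions:

```python
import math
import random

def em_gmm(X, C=8, iters=25, seed=0):
    """Fit a diagonal-covariance Gaussian mixture to feature vectors X
    with EM. Returns (weights, means, variances)."""
    rng = random.Random(seed)
    D = len(X[0])
    mu = [list(rng.choice(X)) for _ in range(C)]   # init means at data points
    var = [[1.0] * D for _ in range(C)]
    w = [1.0 / C] * C
    for _ in range(iters):
        # E-step: responsibilities r[n][k] proportional to w_k N(x_n | mu_k, var_k)
        R = []
        for x in X:
            logp = []
            for k in range(C):
                lp = math.log(w[k])
                for d in range(D):
                    lp += -0.5 * (math.log(2 * math.pi * var[k][d])
                                  + (x[d] - mu[k][d]) ** 2 / var[k][d])
                logp.append(lp)
            m = max(logp)
            p = [math.exp(lv - m) for lv in logp]   # log-sum-exp for stability
            s = sum(p)
            R.append([pi / s for pi in p])
        # M-step: re-estimate weights, means, variances
        N = len(X)
        for k in range(C):
            nk = sum(R[n][k] for n in range(N)) + 1e-9
            w[k] = nk / N
            for d in range(D):
                mu[k][d] = sum(R[n][k] * X[n][d] for n in range(N)) / nk
                var[k][d] = sum(R[n][k] * (X[n][d] - mu[k][d]) ** 2
                                for n in range(N)) / nk + 1e-6  # keep positive
    return w, mu, var
```

In the actual system each feature vector would hold the colour, texture, and position values of one pixel block, and C would be fixed to 8.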
14
Dynamic Model Indexing: GMM of multiple frames around the keyframe Feature vectors extended with a time-stamp normalized to [0,1]
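Extending the feature vectors with the normalised time-stamp might look like this (a hypothetical helper; the slide only states that the stamp lies in [0,1], so the linear t/(T−1) mapping is an assumption):

```python
def add_timestamps(frames):
    """frames: one list of feature vectors per frame, in temporal order.
    Appends a time-stamp normalised to [0,1] to every feature vector,
    so the GMM can capture spatio-temporal structure."""
    T = len(frames)
    return [[fv + [t / (T - 1) if T > 1 else 0.5] for fv in feats]
            for t, feats in enumerate(frames)]
```

The extended vectors are then pooled across all frames of the shot before fitting the mixture, which gives the dynamic model far more training data than a single keyframe.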
15
Examples [image slide]
16
Examples [image slide]
17
Examples [image slide]
18
Dynamic vs. Static Dynamic model –Retrieves more relevant shots (227 vs. 212) –Places these higher in the result lists (MAP 0.0124 vs. 0.0089) Topic 142 (has an example from the collection) –Dynamic finds 15 relevant shots vs. 3 for static
19
Example: Topic 136 Dynamic ranks 1–4 (8 found) vs. static ranks 1–4 (4 found) [key-frame images]
20
Dynamic Model Advantages More training data for the models –Less sensitive to random initialization Reduced dependency upon selecting an appropriate keyframe Spatio-temporal aspects of the shot are captured
21
Modelling Textual Content
22
Hierarchical Language Model MNM smoothed over multiple levels: α·P(T|Shot) + β·P(T|‘Scene’) + γ·P(T|Video) + (1−α−β−γ)·P(T|Collection) The additional video level is beneficial –On 2003 data, 0.148 vs. 0.134
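The four-level mixture above can be written down directly. A minimal sketch, assuming each level's unigram model is a term-to-probability dictionary; the weight values and the function name are illustrative, not the team's tuned settings:

```python
def hierarchical_lm(term, shot_lm, scene_lm, video_lm, coll_lm,
                    alpha=0.4, beta=0.2, gamma=0.2):
    """P(T|shot) smoothed over scene, video, and collection levels.
    The four mixture weights sum to 1 via the final (1-a-b-g) term."""
    return (alpha * shot_lm.get(term, 0.0)
            + beta * scene_lm.get(term, 0.0)
            + gamma * video_lm.get(term, 0.0)
            + (1 - alpha - beta - gamma) * coll_lm.get(term, 0.0))
```

Because the collection term is always non-zero for indexed vocabulary, a query word absent from a shot's ASR still receives some probability mass, which is the point of smoothing across levels.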
23
Using Video-OCR ASR –MAP 0.0680 ASR+OCR –MAP 0.0691 –Higher initial precision, more relevant shots found –Difference is not statistically significant Further improvements possible? –Pre-process OCR data? Add captions?
24
MULTI: modalities, examples
25
Multi-modal Retrieval Combining visual and text scores (using the independence assumption) gives better results than each modality on its own –Dynamic+ASR (manual) finds 18 additional relevant shots over ASR only (565 vs. 547) –Consistent with the TRECVID 2003 finding!
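Under the independence assumption, combining the two modalities reduces to multiplying their scores (summing their logs). A minimal sketch with made-up shot identifiers and probabilities; the function name is an assumption:

```python
import math

def rank_multimodal(scores):
    """scores: {shot_id: (p_visual, p_text)}. Assuming the two
    modalities are independent, rank shots by the joint score
    p_visual * p_text, computed as a sum of logs."""
    return sorted(scores,
                  key=lambda s: math.log(scores[s][0]) + math.log(scores[s][1]),
                  reverse=True)
```

A shot that is mediocre in both modalities can still outrank one that is strong in only a single modality, which is one way the combination buys robustness.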
26
Query by Multiple Examples Rank-based vs. score-based combination –Round-robin (min{rank}) –CMS (mean{score}) Results: –RR gives better MAP (0.0124 vs. 0.0089) –CMS finds more relevant shots (239 vs. 227)
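The two merging strategies can be sketched as follows. Treating an item that is absent from a run as contributing score 0 to the CMS mean is an assumption the slide does not spell out:

```python
def round_robin(runs, k=10):
    """Rank-based merge (min{rank}): interleave the per-example
    ranked lists, skipping items already taken."""
    merged, seen = [], set()
    for rank in range(max(len(r) for r in runs)):
        for run in runs:
            if rank < len(run) and run[rank] not in seen:
                seen.add(run[rank])
                merged.append(run[rank])
    return merged[:k]

def comb_mean_score(score_runs, k=10):
    """Score-based merge (CMS): rank items by their mean score over
    the runs; items missing from a run count as score 0."""
    totals = {}
    for run in score_runs:
        for item, s in run.items():
            totals[item] = totals.get(item, 0.0) + s
    n = len(score_runs)
    return sorted(totals, key=lambda i: totals[i] / n, reverse=True)[:k]
```

RR rewards items that any single example ranks highly, while CMS rewards items scored consistently well across examples, which matches the observed trade-off (better MAP for RR, more relevant shots found for CMS).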
27
Query by Multiple Examples A manually made selection of examples gave better results than using all of them Order effect with RR –Dynamic: video examples first –Static: image examples first –The difference results from the initial precision
28
Interactive Search
29
Interactive System Based on a pre-computed similarity matrix –ASR language model –Static key-frame model (using ALA) Update probability scores from the searcher’s feedback –See Boldareva & Hiemstra, CIVR 2004 Select the most informative modality automatically –Monitor marginal entropy to indicate user-system performance, and apply it to choosing the update strategy (text/visual/combined) for the next iteration
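Monitoring marginal entropy could look like the following sketch. How the system turns its relevance scores into a distribution is an assumption here; the sketch simply normalises them, on the intuition that entropy falls as the search concentrates probability on a few shots:

```python
import math

def marginal_entropy(probs):
    """Entropy of the normalised relevance distribution over shots.
    A near-uniform distribution (high entropy) suggests the search is
    not converging; a peaked one (low entropy) suggests it is."""
    z = sum(probs)
    return -sum((p / z) * math.log(p / z) for p in probs if p > 0)
```

A per-iteration drop in this quantity would be the signal used to pick the update strategy (text, visual, or combined) for the next round.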
30
Marginal Entropy ~ MAP [plot]
31
Interactive Results An interactive strategy combining multiple modalities is in general beneficial (MAP=0.1900), even when one modality does not perform well Monitoring marginal entropy is not yet successful for deciding between modalities for the update strategy (but still promising)
32
Surprise, Surprise…
33
Under the Hood: Work in Progress Back to the Future – DB+IR!!! –All static model processing has been moved from customised Matlab scripts to MonetDB query plans (CWI’s open-source main-memory DBMS) –Parallel training process on a Linux cluster Next steps: –Integration with MonetDB’s XQuery front-end (Pathfinder) and the Cirquid project’s XML-IR system (TIJAH)
34
Conclusions For most topics, neither the static nor the dynamic visual model captures the user’s information need sufficiently… …averaged over all topics, however, it is better to use both modalities than ASR only Working hypothesis: matching against both modalities gives robustness
35
Conclusions Visual aspects of an information need are best captured by using multiple examples Combining results for multiple (good) examples in round-robin fashion, each ranked on both modalities, gives near-best performance for almost all topics
36
Unfinished Research! Analysis of TRECVID 2004 results –Q: Why is the dynamic model better? More training data, spatio-temporal aspects in the model, varying number of components, less dependence on the keyframe, … –Q: Why does the audio not help? –Q: Why does the entropy-based monitoring of user-system performance not help?
37
Unfinished Research! Comparison to TRECVID 2003 results –Apply the 2004 training procedure to 2003 data –Apply the anchor-person detector –Apply 2003 topic processing (& vice versa) Static model –Full covariance matrices –Varying number of components
38
Future Research Retrieval Model –Apply the document generation approach –How to properly model multiple modalities? –How to handle multiple query examples? System Aspects –Integration of the INEX and TRECVID systems –Top-k query processing
39
Thanks!!!