1
Using Probabilistic Models for Multimedia Retrieval Arjen P. de Vries arjen@acm.org (Joint research with Thijs Westerveld) Centrum voor Wiskunde en Informatica E-BioSci/ORIEL Annual Workshop, Sep 3-5, 2003
2
Introduction — example image queries: "Eiffel tower" vs. "scary/spooky Eiffel tower"
3
Outline Generative Models –Generative Model –Probabilistic retrieval –Language models, GMMs Experiments –Corel experiments –TREC Video benchmark Conclusions
4
What is a Generative Model? A statistical model for generating data –Probability distribution over samples in a given ‘language’ M: P(x1, x2 | M) = P(x1 | M) P(x2 | M, x1) © Victor Lavrenko, Aug. 2002
5
Generative Models — example: text sampled at random from a language model (LM) estimated on the talk abstract: “video of Bayesian model to that present the disclosure can a on for retrieval in have is probabilistic still of for of using this In that is to only queries queries visual combines visual information look search video the retrieval based search. Both get decision (a visual generic results (a difficult We visual we still needs, search. talk what that to do this for with retrieval still specific retrieval information a as model still”
6
Unigram and higher-order models Unigram Models N-gram Models Other Models –Grammar-based models, etc. –Mixture models. Unigram: P(w1 w2 w3 w4) = P(w1) P(w2) P(w3) P(w4). N-gram (bigram): P(w1 w2 w3 w4) = P(w1) P(w2|w1) P(w3|w2) P(w4|w3). © Victor Lavrenko, Aug. 2002
7
The fundamental problem Usually we don’t know the model M –But have a sample representative of that model. First estimate a model M(sample) from the sample, then compute the observation probability P(query | M(sample)) © Victor Lavrenko, Aug. 2002
8
Indexing: determine models Indexing –Estimate Gaussian Mixture Models from images using EM –Based on feature vectors with colour, texture and position information from pixel blocks –Fixed number of components (Docs → Models)
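A minimal sketch of this indexing step, assuming scikit-learn is available and that each image has already been reduced to a matrix of per-block feature vectors (colour, texture, position); the component count, feature dimensionality and random data are illustrative, not the exact CWI implementation:

import numpy as np
from sklearn.mixture import GaussianMixture

def index_image(block_features, n_components=8, seed=0):
    # block_features: (n_blocks, n_dims) array of per-block descriptors,
    # e.g. DCT coefficients for Y/Cb/Cr plus the (x, y) block position.
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type='full',
                          random_state=seed)
    gmm.fit(block_features)          # parameters estimated with EM
    return gmm

# toy 'collection': random stand-ins for real block descriptors
rng = np.random.default_rng(0)
collection = [rng.normal(size=(1000, 14)) for _ in range(3)]
models = [index_image(blocks) for blocks in collection]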
9
Retrieval: use query likelihood Query: Which of the models is most likely to generate these 24 samples?
10
Probabilistic Image Retrieval ?
11
Query Rank by P(Q|M): P(Q|M1), P(Q|M4), P(Q|M3), P(Q|M2)
12
Probabilistic Retrieval Model Text –Rank using probability of drawing query terms from document models Images –Rank using probability of drawing query blocks from document models Multi-modal –Rank using joint probability of drawing query samples from document models
13
Text Models Unigram Language Models (LM) –Urn metaphor: P(q1 q2 q3 q4) ~ P(q1) P(q2) P(q3) P(q4) = 4/9 * 2/9 * 4/9 * 3/9 © Victor Lavrenko, Aug. 2002
14
Generative Models and IR Rank models (documents) by probability of generating the query Q: P(Q|M1) = 4/9 * 2/9 * 4/9 * 3/9 = 96/9^4 P(Q|M2) = 3/9 * 3/9 * 3/9 * 3/9 = 81/9^4 P(Q|M3) = 2/9 * 3/9 * 2/9 * 4/9 = 48/9^4 P(Q|M4) = 2/9 * 5/9 * 2/9 * 2/9 = 40/9^4
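The same ranking written out in a few lines; the term probabilities are the urn proportions from the slide (a toy example, not real document statistics):

from math import prod

# probability of each of the four query terms under each document model
doc_probs = {
    'doc1': [4/9, 2/9, 4/9, 3/9],
    'doc2': [3/9, 3/9, 3/9, 3/9],
    'doc3': [2/9, 3/9, 2/9, 4/9],
    'doc4': [2/9, 5/9, 2/9, 2/9],
}

scores = {doc: prod(p) for doc, p in doc_probs.items()}
for doc, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(doc, round(score, 5))   # doc1 (96/9^4) ranks first, doc4 (40/9^4) last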
15
The Zero-frequency Problem Suppose some event does not occur in our sample –The model will assign zero probability to that event –And to any set of events involving the unseen event. Happens frequently with language. It is incorrect to infer zero probabilities –Especially when dealing with incomplete samples
16
Smoothing Idea: shift part of the probability mass to unseen events. Interpolation with a background model (General English) –Reflects expected frequency of events –Plays the role of IDF: P(w) = λ P(w | document) + (1 − λ) P(w | background)
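A small sketch of this interpolation (Jelinek-Mercer style smoothing); the value of λ, the toy document model and the background probabilities are assumptions of the illustration:

import math

def smoothed_log_likelihood(query_terms, doc_model, background, lam=0.8):
    # P(t) = lam * P(t | document) + (1 - lam) * P(t | background)
    score = 0.0
    for t in query_terms:
        p = lam * doc_model.get(t, 0.0) + (1 - lam) * background.get(t, 1e-9)
        score += math.log(p)
    return score

doc = {'eiffel': 3/9, 'tower': 4/9, 'paris': 2/9}
bg  = {'eiffel': 0.005, 'tower': 0.01, 'paris': 0.02, 'spooky': 0.001}
# 'spooky' is unseen in the document but no longer gets zero probability
print(smoothed_log_likelihood(['spooky', 'eiffel', 'tower'], doc, bg))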
17
Image Models Urn metaphor not useful –Drawing pixels useless: pixels carry no semantics –Drawing pixel blocks not effective: the chance of drawing the exact query blocks from a document is slim. Use Gaussian Mixture Models (GMM) –Fixed number of Gaussian components/clusters/concepts
18
Image Models Expectation-Maximisation (EM) algorithm –iteratively estimate component assignments, then re-estimate component parameters
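A compact numpy sketch of the two EM steps for a one-dimensional, two-component mixture (hypothetical toy data; the real models are multivariate, over colour, texture and position features):

import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 300)])

K = 2
mu, var, pi = np.array([-1.0, 1.0]), np.ones(K), np.full(K, 1.0 / K)

for _ in range(50):
    # E-step: responsibility of each component for every sample
    dens = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    resp = pi * dens
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate weights, means and variances from the responsibilities
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(x)

print(mu, var, pi)   # should recover roughly means (-2, 3), variances (1, 0.25), weights (0.5, 0.5)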
19
Expectation Maximization — Components 1, 2, 3 (E-step / M-step illustration)
20
Animation: EM iterations for Components 1, 2, 3 (E-step / M-step)
21
Key-frame representation Query model: –split colour channels (Y, Cb, Cr) –take samples (pixel blocks) –compute DCT coefficients and position per block –run the EM algorithm. [Figure: example matrix of per-block DCT coefficients]
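A rough sketch of this feature extraction, assuming an RGB key-frame as a numpy array; the block size, number of retained coefficients, coefficient ordering and the YCbCr conversion weights are illustrative choices, not necessarily those used in the experiments:

import numpy as np
from scipy.fft import dctn

def block_features(rgb, block=8, n_y=10, n_c=1):
    # Per-block descriptor: leading DCT coefficients of Y, Cb, Cr + block position.
    rgb = rgb.astype(float)
    # simple RGB -> YCbCr conversion (BT.601 weights, offsets omitted)
    y  =  0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = -0.169 * rgb[..., 0] - 0.331 * rgb[..., 1] + 0.500 * rgb[..., 2]
    cr =  0.500 * rgb[..., 0] - 0.419 * rgb[..., 1] - 0.081 * rgb[..., 2]

    feats = []
    for i in range(0, y.shape[0] - block + 1, block):
        for j in range(0, y.shape[1] - block + 1, block):
            vec = []
            for chan, n in ((y, n_y), (cb, n_c), (cr, n_c)):
                coeffs = dctn(chan[i:i + block, j:j + block], norm='ortho')
                vec.extend(coeffs.ravel()[:n])               # keep leading coefficients
            vec.extend([i / y.shape[0], j / y.shape[1]])     # block position
            feats.append(vec)
    return np.array(feats)

frame = np.random.randint(0, 256, size=(144, 176, 3))        # toy key-frame
print(block_features(frame).shape)                           # (396, 14)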
22
Scary Formulas
23
Probabilistic Retrieval Model Find the document(s) D* with the highest probability given query Q (MAP): D* = arg max_D P(D|Q) = arg max_D P(Q|D) P(D). With equal priors this reduces to maximum likelihood, D* = arg max_D P(Q|D), which is approximated by minimum Kullback-Leibler divergence between query and document models.
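Written out, the chain of steps behind this slide is the standard derivation (not copied from the paper); here the q_i are the query samples and \hat{P}_Q is the empirical query distribution:

D^{*} = \arg\max_{D} P(D \mid Q)
      = \arg\max_{D} P(Q \mid D)\, P(D)                    \quad\text{(Bayes' rule; } P(Q)\text{ constant)}
      \approx \arg\max_{D} P(Q \mid D)                     \quad\text{(equal document priors)}
      = \arg\max_{D} \sum_{i} \log P(q_i \mid D)           \quad\text{(ML, independent samples)}
      \approx \arg\min_{D} \mathrm{KL}\big(\hat{P}_Q \,\|\, P(\cdot \mid D)\big)

The last step holds because the entropy of the empirical query distribution does not depend on D, so ranking by query likelihood and ranking by minimum KL divergence coincide.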
24
Query Models Query –Bag of textual terms –Bag of visual blocks. Query model –empirical query distribution. Rank by KL distance between query model and document model.
25
Corel Experiments
26
Testing the Model on Corel 39 classes, ~100 images each. Build models from all images. Use each image as a query –Rank the full collection –Compute MAP (mean average precision): AP = average of precision values after each relevant image is retrieved; MAP is the mean of AP over multiple queries –Relevant = images from the query’s class
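A small helper computing average precision as described (precision after each relevant item), usable with any ranked list; the ranked lists and relevance sets below are hypothetical:

def average_precision(ranked_ids, relevant_ids):
    # AP = mean of the precision values taken after each relevant item is retrieved
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    # MAP = mean of AP over all queries; runs = [(ranked_ids, relevant_ids), ...]
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

# hypothetical example: two queries over a tiny collection
runs = [(['a', 'b', 'c', 'd'], {'a', 'c'}),   # AP = (1/1 + 2/3) / 2
        (['d', 'b', 'a', 'c'], {'b'})]        # AP = 1/2
print(mean_average_precision(runs))           # ~0.667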
27
Example results Query: Top 5:
28
MAP per Class (mean: .12) English Pub Signs .36; English Country Gardens .33; Arabian Horses .31; Dawn & Dusk .21; Tropical Plants .19; Land of the Pyramids .19; Canadian Rockies .18; Lost Tribes .17; Elephants .17; Tigers .16; Tropical Sea Life .16; Exotic Tropical Flowers .16; Lions .15; Indigenous People .15; Nesting Birds .13; … Sweden .07; Ireland .07; Wildlife of the Galapagos .07; Hawaii .07; Rural France .07; Zimbabwe .07; Images of Death Valley .07; Nepal .07; Foxes & Coyotes .06; North American Deer .06; California Coasts .06; North American Wildlife .06; Peru .05; Alaskan Wildlife .05; Namibia .05
29
Class confusion Query from class A Relevant from class B Queries retrieve images from own class Interesting mix-ups –Beaches – Greek islands –Indigenous people – Lost tribes –English country gardens – Tropical plants – Arabian Horses Similar backgrounds
30
Tuning the Models Yet another subset of Corel data –39 classes, 10 images each –Index as before and calculate MAP Vary model parameters –NY: Number of DCT coefficients from Y channel (1,3,6,10,15,21) –NCbCr: Number of DCT coefficients from CB and Cr channels (0,1,NY) –Xypos: Do/do not use position of samples –C: number of components in GMM (1,2,4,8,16,32)
31
Example Image
32
Example models + samples Varying C, NY=10, NCbCr=1, Xypos=1: C=4, C=8, C=32
33
Example models + samples Varying NCbCr, NY=10, Xypos=1, C=8: NCbCr=0, NCbCr=1, NCbCr=10
34
MAP with different parameter settings, for increasing numbers of components C (1, 2, 4, 8, 16, 32):
NCbCr=0, Xypos=0: .08 .18 .20 .21
NCbCr=0, Xypos=1: .09 .19 .21 .20
NCbCr=1, Xypos=0: .13 .22 .23
NCbCr=1, Xypos=1: .13 .22 .23 .22
NCbCr=10, Xypos=0: .12 .22 .24 .23
NCbCr=10, Xypos=1: .13 .21 .24 .23
35
Statistical Significance Mixture better than single Gauss (C>1). Small differences between settings –Yet, small differences might be significant. Wilcoxon signed-rank test (significance level 5%):
A B Diff Rank Signed rank
97 96 -1 1 -1
88 86 -2 2.5 -2.5
75 79 4 4 4
90 88 -2 2.5 -2.5
85 93 8 5 5
mean(A) = 87, mean(B) = 88.4; sum of ranks = 15; Z+ = 9, Z- = 6; expected value = 7.5
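With scipy the same test is a single call; the five paired per-query scores are the ones from the slide’s toy example:

from scipy.stats import wilcoxon

A = [97, 88, 75, 90, 85]   # per-query scores under setting A
B = [96, 86, 79, 88, 93]   # per-query scores under setting B

stat, p = wilcoxon(A, B)
print(stat, p)             # p > 0.05 here, so the difference is not significant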
36
Statistical Significance Results –Optimal number of components at C=8 Fewer components -> insufficient resolution More components -> overfitting –Colour information is important (NCbCr >0) More is better if enough components –Position information undecided although using it never harms
37
Background Matching Query: Top 5:
38
Background Matching Query: Top 5:
39
TREC Experiments
40
TREC Video Track Goal: Promote progress in content-based video retrieval via metric-based evaluation 25 Topics –Multimedia descriptions of an information need; 22 had video examples (avg. 2.7 each), 8 had image examples (avg. 1.9 each) Task is to return up to 100 best shots –NIST assessors judged the top 50 shots from each submitted result set; subsequent full judgements showed only minor variations in performance
41
Video Data Used mainly Internet Archive material –advertising, educational, industrial, amateur films 1930-1970 –Noisy, strange colour, but real archive data –73.3 hours in total
42
Video Representation Video as sequence of shots (all TREC) –Common ground truth shot set used in evaluation; 14,524 shots Shot = image + text (CWI specific): –Key-frame (middle frame of shot) –ASR Speech Transcript (LIMSI)
43
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities. George Washington / Football players
44
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities. Golden Gate Bridge / Sailboats
45
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities Overhead views of cities
46
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities Rocket taking off
47
Search Topics Summary Requested shots with specific/generic: –Combinations of the above: People spending leisure time at the beach; Locomotive approaching the viewer; Microscopic views of living cells
48
Experiments …with official TREC measures –Query representation –Textual/Visual/Combined runs …without measures; inspecting visual similarity –Selecting components –Colour vs. texture –EM initialisation
49
Measures Precision –fraction of retrieved documents that is relevant Recall –fraction of relevant documents that is retrieved Average Precision –precision averaged over different levels of recall Mean Average Precision (MAP) –mean of average precision over all queries
50
Textual and Visual runs Textual –Short Queries (Topic description) –Long Queries (Topic description + transcripts from video examples) Visual –All examples –Best examples Combined –Simply add textual and visual log-likelihood scores (joint probability of seeing both query terms and query blocks)
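Combining the runs amounts to adding the two log-likelihoods per shot, i.e. ranking by the joint probability of query terms and query blocks; the score dictionaries below are hypothetical:

# log P(query terms | shot) and log P(query blocks | shot), per shot id (hypothetical)
text_scores   = {'shot1': -12.3, 'shot2': -10.8, 'shot3': -11.5}
visual_scores = {'shot1': -54.2, 'shot2': -60.1, 'shot3': -49.9}

# adding log-likelihoods = ranking by the joint probability of terms and blocks
combined = {s: text_scores[s] + visual_scores[s] for s in text_scores}
ranking = sorted(combined, key=combined.get, reverse=True)
print(ranking)   # ['shot3', 'shot1', 'shot2']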
51
Textual and Visual runs Textual > Visual; Tlong > Tshort. Combining is overall not useful; if both the visual and textual runs are good, combining improves results.
52
Visual runs Scores for purely visual runs are low (MAP .037), and drop further when video examples are removed from the relevance judgements
53
Observation CBR successful under two conditions: –the query example is derived from the same source as the target objects –a domain-specific detector is at hand
54
vt076: Find shots with James H. Chandler Top 10:
55
Retrieval Results Non-interactive results disappointing –MAP across all participants/systems .056 –Ignoring ASR runs, MAP drops to .044. Only known-item retrieval possible –MAP for queries with examples from the collection .094 –MAP without these .026 (-40% from average). No significant differences between variants
56
Selecting Query Images Find shots of the Golden Gate Bridge Full topic –use all examples Best example –compute results for individual examples and find best Manual example –manually select good example from ones available in topic
57
Selecting Query Images In general Best > Full (MAP full: 0.0287, best: 0.0444) Sometimes Full > Best
58
Selecting Components Query articulation can improve retrieval effectiveness, but requires enormous user effort [lowlands2001] Document models (GMM) allow for easy selection of important regions [LL10]
59
Selecting Components For each topic we manually selected meaningful components No improvement in MAP Perhaps useful for more general queries (feature detection?) –Further investigation necessary
60
Component Search
61
[Component search results at ranks 1-3 and 18]
62
Being lucky… 1-3: 101768 Rel.: visually similar by chance / visually NOT similar / key-frame does not represent the shot
63
Informal Results Analysis Forget about MAP scores. Investigate two aspects of the experimental results –How is image similarity captured? Look at top 10 results –How do visual results contribute to (MAP) scores? Look at key-frames from relevant shots in the top 100. Qualitative observations
64
Some Observations Colour dominates texture Homogeneous Queries –Semantically similar results –…or at least visually similar Heterogeneous queries –Results dominated by subset of query
65
Some Observations Colour dominates texture
67
Some Observations Colour dominates texture Homogeneous queries give intuitive results –Semantically similar –... or at least visually
68
Homogeneous query with semantics
69
Homogeneous query: no semantics, but visual similarity. Top 5 for audience; Top 5 for grass. (Full query / audience component / grass component)
70
Some Observations Colour dominates texture Homogeneous queries give intuitive results –Semantically similar –... or at least visually Results for heterogeneous queries often dominated by part of samples
71
Heterogeneous query – full query
72
Heterogeneous query – grass samples
73
Heterogeneous query Possible explanations for the domination of the sky samples: –no document in the collection explains the grass samples well –sky samples are well explained by any document (i.e. background probability is high). Smoothing with background probabilities might help
74
Heterogeneous queries with smoothing Smoothing seems to help somewhat, but the problem is not solved. Looking for a model which favours documents with balanced individual sample scores
75
Controlled Experiments What determines visual similarity in the generative probabilistic model? Small special-purpose collections created from the large TREC video collection: 1. Emphasis on colour information 2. Role of the initialisation of the mixture models
76
Colour Experiments Collection with 2 copies of each frame –Original colour image –Greyscale version Build models –Models can describe colour and texture Search using colour and greyscale queries
77
Colour Experiments For each frame i, build a colour model M_iA and a greyscale model M_iB (collection: M_1A, M_1B, …, M_NA, M_NB). Question: is P(query | M_iA) ~ P(query | M_iB), for colour and for greyscale queries?
78
Distance between pairs of models, without colour information Results: the paired models rank close together (mean ranks 2.9 and 2.0)
79
Distance between pairs of models, with colour information Results: mean ranks 89.7 and 7.3 — indeed, colour dominates texture
80
Colour Experiments Conclusion: –Model from colour image only captures colour information (queries vs. models: rank 1, rank 7.3, rank 89.7)
81
EM initialisation EM sensitive to initialisation –Build collection with several models for each frame –Compare scores for different models from same frame –Concentrate on top ranks
82
EM initialisation Collection with: –2 videos –5 frames / shot –10 models / frame, from random initialisations. Models from the same frame should have similar scores
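A small sketch of this stability check, reusing the scikit-learn mixtures from the indexing sketch; the random block features, query samples and component count are hypothetical stand-ins:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
frame_blocks = rng.normal(size=(1000, 14))   # stand-in for one frame's block features
query_blocks = rng.normal(size=(24, 14))     # stand-in for the query samples

# ten models for the same frame, differing only in the random EM initialisation
scores = []
for seed in range(10):
    gmm = GaussianMixture(n_components=8, covariance_type='full',
                          random_state=seed).fit(frame_blocks)
    scores.append(gmm.score(query_blocks))    # mean log-likelihood of the query

print(np.std(scores))   # a small spread means initialisation barely affects the ranking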
83
EM initialisation
84
Results –Models from the query frame all rank near the top of the list (mean rank: 8.06, std. dev. 5.95) –Models from the same shot are closer together than models from other frames –In general: higher-ranking frames have their models closer together. Although EM is sensitive to initialisation, this does not affect the ranking much
85
Concluding Remarks
86
Lessons TREC-10 Generalisation remains a problem –Good results require examples from the collection Textual search outperforms visual search –Even with topics designed for visual retrieval! Successful visual retrieval often comes down to luck (matching background, known-item) Combining textual and visual results is possible in the presented framework –When both have reasonable performance, the combination outperforms the individual runs
87
Lessons TREC-10 Component queries retrieve intuitive results –Convenient for query articulation! Colour dominates texture. The sensitivity of EM to initialisation does not harm results. Note: findings are specific to this model, but at least suggest hypotheses for others to investigate
88
Need 4 Test Collections Results on one collection do not automatically transfer to another –Multiple collections needed to conclude one technique is better than another What is a good Test Collection? –Should be representative of a realistic task This is what TREC tries to achieve –Results should be measurable Like when using Corel
89
Plans for TREC-11 Better video representation –More frames per shot –Audio GMM (on MFCC) Spatial and temporal aspects –Shot = background + “objects” Special research interest in the right balance between interactive query articulation and (semi-)automatic query formulation
90
Future plans Balancing results for heterogeneous queries Propagating generic concepts
91
Care for more? A Probabilistic Multimedia Retrieval Model and Its Evaluation, Thijs Westerveld, Arjen de Vries, Alex van Ballegooij, Franciska de Jong and Djoerd Hiemstra, EURASIP Journal on Applied Signal Processing, 2003:2. arjen@acm.org