Using Probabilistic Models for Multimedia Retrieval Arjen P. de Vries (Joint research with Thijs Westerveld) Centrum voor Wiskunde en Informatica E-BioSci/ORIEL Annual Workshop, Sep 3-5, 2003
Introduction Example queries: images of the Eiffel tower vs. a 'scary/spooky' Eiffel tower
Outline Generative Models –Generative Model –Probabilistic retrieval –Language models, GMMs Experiments –Corel experiments –TREC Video benchmark Conclusions
What is a Generative Model? A statistical model for generating data –Probability distribution over samples in a given 'language' M
P(q1 q2 | M) = P(q1 | M) * P(q2 | M, q1)
© Victor Lavrenko, Aug. 2002
Generative Models Example: word-salad text as a simple generative model would produce it from the words of this work's abstract (word frequencies are right, but there is no structure): 'video of Bayesian model to that present the disclosure can a on for retrieval in have is probabilistic still of for of using this In that is to only queries queries visual combines visual information look search video the retrieval based search. Both get decision (a visual generic results (a difficult We visual we still needs, search. talk what that to do this for with retrieval still specific retrieval information a as model still'
Unigram and higher-order models Unigram Models N-gram Models Other Models –Grammar-based models, etc. –Mixture models
Unigram: P(q1 q2 q3 q4) = P(q1) P(q2) P(q3) P(q4)
Bigram: P(q1 q2 q3 q4) = P(q1) P(q2 | q1) P(q3 | q2) P(q4 | q3)
© Victor Lavrenko, Aug. 2002
The fundamental problem Usually we don't know the model M –But have a sample S representative of that model First estimate a model M(S) from the sample Then compute the observation probability P(Q | M(S)) © Victor Lavrenko, Aug. 2002
Indexing: determine models Indexing –Estimate Gaussian Mixture Models from images using EM –Based on feature vectors with colour, texture and position information from pixel blocks –Fixed number of components Docs → Models
Retrieval: use query likelihood Query: Which of the models is most likely to generate these 24 samples?
Probabilistic Image Retrieval ?
Query Rank by P(Q|M): P(Q|M1), P(Q|M4), P(Q|M3), P(Q|M2)
Probabilistic Retrieval Model Text –Rank using probability of drawing query terms from document models Images –Rank using probability of drawing query blocks from document models Multi-modal –Rank using joint probability of drawing query samples from document models
Unigram Language Models (LM) –Urn metaphor Text Models
P(Q | M) ~ P(●) P(●) P(●) P(●) = 4/9 * 2/9 * 4/9 * 3/9 (probability of drawing the four query 'balls' from a document urn holding nine)
© Victor Lavrenko, Aug. 2002
Generative Models and IR Rank models (documents) by probability of generating the query Q:
P(Q | M1) = 4/9 * 2/9 * 4/9 * 3/9 = 96/9^4
P(Q | M2) = 3/9 * 3/9 * 3/9 * 3/9 = 81/9^4
P(Q | M3) = 2/9 * 3/9 * 2/9 * 4/9 = 48/9^4
P(Q | M4) = 2/9 * 5/9 * 2/9 * 2/9 = 40/9^4
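To make the ranking concrete, here is a minimal Python sketch of query-likelihood ranking with maximum-likelihood unigram document models; the toy term names and counts are hypothetical and merely reproduce the 4/9 * 2/9 * 4/9 * 3/9 style of the urn example.

```python
# Minimal sketch: rank documents by P(Q | M_D) under an ML unigram model.
from collections import Counter

def query_likelihood(query_terms, doc_terms):
    """P(Q | M_D): product of per-term maximum-likelihood probabilities."""
    counts = Counter(doc_terms)
    total = len(doc_terms)
    p = 1.0
    for term in query_terms:
        p *= counts[term] / total   # becomes zero if the term never occurs in the document
    return p

# toy collection: each document is an 'urn' of nine term occurrences
docs = {"D1": ["red"] * 4 + ["blue"] * 2 + ["green"] * 3,
        "D2": ["red"] * 3 + ["blue"] * 3 + ["green"] * 3}
query = ["red", "blue", "red", "green"]
ranking = sorted(docs, key=lambda d: query_likelihood(query, docs[d]), reverse=True)
print(ranking)   # D1 scores 96/9^4, D2 scores 81/9^4
```

Note that the product collapses to zero as soon as one query term is missing from a document, which is exactly the zero-frequency problem discussed next.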
The Zero-frequency Problem Suppose some event is not in our sample –Model will assign zero probability to that event –And to any set of events involving the unseen event Happens frequently with language It is incorrect to infer zero probabilities –Especially when dealing with incomplete samples
Smoothing Idea: shift part of probability mass to unseen events Interpolation with background model (General English) –Reflects expected frequency of events –Plays role of IDF
P(q | D) = λ * P_ML(q | D) + (1 - λ) * P(q | background)
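A sketch of this interpolated ('Jelinek-Mercer' style) scorer; the weight lam = 0.8 is an arbitrary illustrative value, not the setting used in the experiments.

```python
# Smoothed log query likelihood: interpolate the document model with a background model.
import math
from collections import Counter

def smoothed_log_likelihood(query_terms, doc_terms, collection_terms, lam=0.8):
    """log P(Q | M_D) with interpolation against a background (collection) model."""
    doc_counts, bg_counts = Counter(doc_terms), Counter(collection_terms)
    doc_len, bg_len = len(doc_terms), len(collection_terms)
    score = 0.0
    for term in query_terms:
        p_doc = doc_counts[term] / doc_len
        p_bg = bg_counts[term] / bg_len
        # never zero as long as the term occurs somewhere in the collection
        score += math.log(lam * p_doc + (1 - lam) * p_bg)
    return score
```

The background term is what gives frequent words a small per-occurrence contribution, which is how the interpolation plays the role of IDF.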
Image Models Urn metaphor not useful –Drawing pixels useless: pixels carry no semantics –Drawing pixel blocks not effective: chances of drawing the exact query blocks from a document are slim Use Gaussian Mixture Models (GMM) –Fixed number of Gaussian components/clusters/concepts
Image Models Expectation-Maximisation (EM) algorithm –iteratively: estimate component assignments (E-step), then re-estimate component parameters (M-step)
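For illustration, a single EM pass for a Gaussian mixture with diagonal covariances, written from scratch with NumPy; the fixed number of components and the feature matrix X (one row per pixel block) are assumptions for the sketch, not the talk's exact implementation.

```python
# One EM iteration for a diagonal-covariance Gaussian mixture over block features.
import numpy as np

def em_step(X, means, variances, weights):
    n, d = X.shape
    C = len(weights)
    # E-step: responsibility of each component for each sample
    resp = np.zeros((n, C))
    for c in range(C):
        diff = X - means[c]
        log_pdf = -0.5 * (np.sum(diff**2 / variances[c], axis=1)
                          + np.sum(np.log(2 * np.pi * variances[c])))
        resp[:, c] = weights[c] * np.exp(log_pdf)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate component parameters from the soft assignments
    Nc = resp.sum(axis=0)
    weights = Nc / n
    means = (resp.T @ X) / Nc[:, None]
    variances = np.array([(resp[:, c, None] * (X - means[c])**2).sum(axis=0) / Nc[c]
                          for c in range(C)]) + 1e-6   # variance floor against collapse
    return means, variances, weights
```

Iterating this step until the likelihood stops improving yields the per-image mixture model used for indexing.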
(figure: EM animation fitting components 1-3 to image samples, alternating Expectation and Maximization steps)
Key-frame representation: split colour channels (Y, Cb, Cr), take samples from pixel blocks, compute DCT coefficients and position per block, then estimate a model with the EM algorithm
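A hedged sketch of this key-frame pipeline in Python (NumPy/SciPy): convert to YCbCr, cut the frame into 8x8 blocks, keep a few DCT coefficients per channel plus the block position. The parameter values and the coefficient ordering are illustrative only, not the exact choices of the original system.

```python
# Build one feature vector per 8x8 block of a key-frame.
import numpy as np
from scipy.fft import dctn

def rgb_to_ycbcr(img):
    """img: H x W x 3 float array in [0, 1] -> Y, Cb, Cr channels (BT.601-style)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr =  0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def block_features(img, ny=10, ncbcr=1, block=8):
    """ny DCT coefficients of Y, ncbcr of Cb and Cr, plus normalised block position."""
    y, cb, cr = rgb_to_ycbcr(img)
    h, w = y.shape
    feats = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vec = []
            for channel, n in ((y, ny), (cb, ncbcr), (cr, ncbcr)):
                coeffs = dctn(channel[by:by + block, bx:bx + block])
                # first n coefficients in row order; a zig-zag scan would be more faithful
                vec.extend(coeffs.flatten()[:n])
            vec.extend([bx / w, by / h])   # position features
            feats.append(vec)
    return np.array(feats)
```

The resulting matrix is what the EM sketch above would be run on to obtain the Gaussian mixture model for the frame.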
Scary Formulas
Probabilistic Retrieval Model Find document(s) D* with highest probability given query Q (MAP): D* = argmax_D P(D|Q) = argmax_D P(Q|D) P(D) Equal priors: D* = argmax_D P(Q|D) (ML) Approximated by minimum Kullback-Leibler divergence between the query model and the document model
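For reference, a reconstruction in LaTeX of the derivation the slide alludes to (query likelihood under equal priors, and its rank-equivalent KL form); the notation is generic rather than copied from the original slides.

```latex
\begin{align*}
D^{*} &= \arg\max_{D} P(D \mid Q) = \arg\max_{D} P(Q \mid D)\, P(D) && \text{Bayes' rule} \\
      &= \arg\max_{D} P(Q \mid D) && \text{equal document priors} \\
      &= \arg\max_{D} \textstyle\sum_{q \in Q} \log P(q \mid D) && \text{independent query samples} \\
      &\approx \arg\min_{D} \, \mathrm{KL}\big(\hat{P}(\cdot \mid Q) \,\|\, P(\cdot \mid D)\big) && \text{rank-equivalent KL form}
\end{align*}
```

The last step uses the empirical query distribution: maximising the average log-likelihood of the query samples differs from minimising the KL divergence only by the query entropy, which is constant over documents.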
Query Models Query –Bag of textual terms –Bag of visual blocks Query model –empirical query distribution –KL distance to document models
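A small Python sketch of ranking by KL distance between the empirical query distribution and (already smoothed) document models; doc_model here is a hypothetical dict mapping terms to probabilities, not an interface from the original system.

```python
# Rank documents by increasing KL( empirical query distribution || document model ).
import math
from collections import Counter

def kl_divergence(query_terms, doc_model, epsilon=1e-12):
    """doc_model: term -> P(term | D), assumed smoothed; epsilon guards log(0)."""
    q_counts = Counter(query_terms)
    q_len = len(query_terms)
    kl = 0.0
    for term, count in q_counts.items():
        p_q = count / q_len                  # empirical query distribution
        p_d = doc_model.get(term, epsilon)
        kl += p_q * math.log(p_q / p_d)
    return kl

# smaller KL = better match:
# ranking = sorted(doc_models, key=lambda d: kl_divergence(query, doc_models[d]))
```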
Corel Experiments
Testing the Model on Corel 39 classes, ~100 images each Build models from all images Use each image as query –Rank full collection –Compute MAP (mean average precision) AP = average of precision values after each relevant image is retrieved; MAP is the mean of AP over multiple queries –Relevant = images from the query's class
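The evaluation measure in code form, assuming ranked_ids is a system ranking for one query and relevant_ids holds the images from the query's class.

```python
def average_precision(ranked_ids, relevant_ids):
    """AP: mean of precision values measured at each relevant item in the ranking;
    relevant items that are never retrieved contribute a precision of zero."""
    relevant_ids = set(relevant_ids)
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def mean_average_precision(runs):
    """MAP over a list of (ranked_ids, relevant_ids) pairs, one pair per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```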
Example results Query: Top 5:
MAP per Class (mean: .12)
English Pub Signs .36
English Country Gardens .33
Arabian Horses .31
Dawn & Dusk .21
Tropical Plants .19
Land of the Pyramids .19
Canadian Rockies .18
Lost Tribes .17
Elephants .17
Tigers .16
Tropical Sea Life .16
Exotic Tropical Flowers .16
Lions .15
Indigenous People .15
Nesting Birds .13
…
Sweden .07
Ireland .07
Wildlife of the Galapagos .07
Hawaii .07
Rural France .07
Zimbabwe .07
Images of Death Valley .07
Nepal .07
Foxes & Coyotes .06
North American Deer .06
California Coasts .06
North American Wildlife .06
Peru .05
Alaskan Wildlife .05
Namibia .05
Class confusion Query from class A Relevant from class B Queries retrieve images from own class Interesting mix-ups –Beaches – Greek islands –Indigenous people – Lost tribes –English country gardens – Tropical plants – Arabian Horses Similar backgrounds
Tuning the Models Yet another subset of Corel data –39 classes, 10 images each –Index as before and calculate MAP Vary model parameters –NY: Number of DCT coefficients from Y channel (1,3,6,10,15,21) –NCbCr: Number of DCT coefficients from Cb and Cr channels (0,1,NY) –Xypos: Do/do not use position of samples –C: number of components in GMM (1,2,4,8,16,32)
Example Image
Example models + samples Varying C, NY=10, NCbCr=1, Xypos=1: C=4, C=8, C=32
Example models + samples Varying NCbCr, NY=10, Xypos=1, C=8: NCbCr=0, NCbCr=1, NCbCr=10
MAP with different parameters (table over NCbCr, Xypos and C = 1, 2, 4, 8, 16, 32)
Statistical Significance Mixture better than single Gauss (C>1) Small differences between settings –Yet, small differences might be significant Wilcoxon signed-rank test (significance level 5%): rank the absolute per-query differences between two settings A and B, sum the positive and negative signed ranks (Z+, Z-), and test whether the smaller sum is smaller than expected by chance
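A minimal example of running this test with SciPy, on hypothetical per-query AP values for two settings A and B.

```python
# Paired Wilcoxon signed-rank test over per-query average precision scores.
from scipy.stats import wilcoxon

ap_run_a = [0.31, 0.12, 0.45, 0.08, 0.22, 0.19, 0.27, 0.15]   # illustrative values
ap_run_b = [0.28, 0.14, 0.40, 0.09, 0.18, 0.21, 0.25, 0.11]

statistic, p_value = wilcoxon(ap_run_a, ap_run_b)
print(f"W = {statistic:.1f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("difference between the two settings is significant at the 5% level")
```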
Statistical Significance Results –Optimal number of components at C=8 Fewer components -> insufficient resolution More components -> overfitting –Colour information is important (NCbCr >0) More is better if enough components –Position information undecided although using it never harms
Background Matching Query: Top 5:
Background Matching Query: Top 5:
TREC Experiments
TREC Video Track Goal: Promote progress in content-based video retrieval via metric-based evaluation 25 Topics –Multimedia descriptions of an information need; 22 had video examples (avg. 2.7 each), 8 had image examples (avg. 1.9 each) Task is to return up to 100 best shots –NIST assessors judged top 50 shots from each submitted result set; subsequent full judgements showed only minor variations in performance
Video Data Used mainly Internet Archive –advertising, educational, industrial, amateur films –Noisy, strange color, but real archive data –73.3 hours in total, partitioned into subsets (partition table not shown)
Video Representation Video as sequence of shots (all TREC) –Common ground truth shot set used in evaluation; 14,524 shots Shot = image + text (CWI specific) : –Key-frame (middle frame of shot) –ASR Speech Transcript (LIMSI)
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities George Washington / Football players
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities Golden Gate Bridge / Sailboats
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities Overhead views of cities
Search Topics Requesting shots with specific or generic: –People, Things, Locations, Activities Rocket taking off
Search Topics Summary Requested shots with specific/generic: –Combinations of the above: People spending leisure time at the beach Locomotive approaching the viewer Microscopic views of living cells
Experiments …with official TREC measures –Query representation –Textual/Visual/Combined runs …without measures; inspecting visual similarity –Selecting components –Colour vs. texture –EM initialisation
Measures Precision –fraction of retrieved documents that is relevant Recall –fraction of relevant documents that is retrieved Average Precision –precision averaged over different levels of recall Mean Average Precision (MAP) –mean of average precision over all queries
Textual and Visual runs Textual –Short Queries (Topic description) –Long Queries (Topic description + transcripts from video examples) Visual –All examples –Best examples Combined –Simply add textual and visual log-likelihood scores (joint probability of seeing both query terms and query blocks)
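The combined run amounts to adding per-shot log-likelihoods, which under independence corresponds to the joint probability of observing both the query terms and the query blocks. A sketch (text_scores and visual_scores are hypothetical dicts mapping shot ids to log-probabilities):

```python
# Combine textual and visual runs by summing log-likelihood scores per shot.
def combined_ranking(text_scores, visual_scores):
    shots = set(text_scores) & set(visual_scores)
    combined = {s: text_scores[s] + visual_scores[s] for s in shots}
    return sorted(combined, key=combined.get, reverse=True)
```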
Textual and Visual runs Textual > Visual Tlong > Tshort Combining is not useful overall If both the visual and the textual run are good, combining improves results
Visual runs Scores for purely visual runs low (MAP .037) Scores drop further when video examples are removed from the relevance judgements
Observation CBR successful under two conditions: –the query example is derived from the same source as the target objects –a domain-specific detector is at hand
vt076: Find shots with James H. Chandler Top 10:
Retrieval Results Non-interactive results disappointing –MAP across all participants/systems: .056 –Ignoring ASR runs, MAP drops to .044 Only known-item retrieval possible –MAP for queries with examples from the collection: .094 –MAP without these: .026 (-40% from average) No significant differences between variants
Selecting Query Images Find shots of the Golden Gate Bridge Full topic –use all examples Best example –compute results for individual examples and find best Manual example –manually select good example from ones available in topic
Selecting Query Images In general Best > Full (MAP full: , best: 0.444) Sometimes Full > Best
Selecting Components Query articulation can improve retrieval effectiveness, but requires enormous user effort [lowlands2001] Document models (GMM) allow easy selection of important regions [LL10]
Selecting Components For each topic we manually selected meaningful components No improvement in MAP Perhaps useful for more general queries (feature detection?) –Further investigation necessary
Component Search
Being lucky… (ranks 1-3, relevant shots: visually similar by chance / visually NOT similar / key-frame does not represent the shot)
Informal Results Analysis Forget about MAP scores Investigate two aspects of experimental results –How is image similarity captured Look at top 10 results –How do visual results contribute to (MAP) scores Look at key-frames from relevant shots in top 100 Qualitative observations
Some Observations Colour dominates texture Homogeneous Queries –Semantically similar results –…or at least visually similar Heterogeneous queries –Results dominated by subset of query
Some Observations Colour dominates texture
Some Observations Colour dominates texture Homogeneous queries give intuitive results –Semantically similar –... or at least visually
Homogeneous query with semantics
Homogeneous query: no semantics, but visual similarity Top 5 audience / Top 5 grass (panels: full query, audience component, grass component)
Some Observations Colour dominates texture Homogeneous queries give intuitive results –Semantically similar –... or at least visually Results for heterogeneous queries often dominated by part of samples
Heterogeneous query: full query
Heterogeneous query: grass samples
Heterogeneous query Possible explanations for the domination of the sky samples: –no document in the collection explains the grass samples well –sky samples are well explained by any document (i.e. background probability is high) Smoothing with background probabilities might help
Heterogeneous queries with smoothing Smoothing seems to help somewhat, but the problem is not solved Looking for a model that favors documents with balanced individual sample scores
Controlled Experiments What determines visual similarity in the generative probabilistic model? Small special-purpose collections created from the large TREC video collection 1. Emphasis on colour information 2. Role of the initialisation of the mixture models
Colour Experiments Collection with 2 copies of each frame –Original colour image –Greyscale version Build models –Models can describe colour and texture Search using colour and greyscale queries
Colour Experiments For each frame i, two models: M_iA from the colour version and M_iB from the greyscale version; compare P(query | M_iA) with P(query | M_iB)
Results Distance between the paired models, without colour information: ranks 2.9 and 2.0 apart on average
Results Distance between the paired models, with colour information: ranks 89.7 and 7.3 apart on average Indeed, colour dominates texture
Colour Experiments Conclusion: –Model from colour image only captures colour information Queries vs. models: rank 1 / rank 7.3 / rank 89.7
EM initialisation EM sensitive to initialisation –Build collection with several models for each frame –Compare scores for different models from same frame –Concentrate on top ranks
EM initialisation Collection with: –2 Videos –5 frames / shot –10 models / frame From random initialisations Models from same frame should have similar scores
EM initialisation
Results –Models from the query frame all near the top of the list Mean rank: 8.06, std. dev. 5.95 –Models from the same shot are closer together than models from other frames –In general: higher-ranking frames have their models closer together Although EM is sensitive to initialisation, this does not affect the ranking much
Concluding Remarks
Lessons TREC-10 Generalization remains a problem –Good results only for examples taken from the collection Textual search outperforms visual search –Even with topics designed for visual retrieval! Successful visual retrieval often comes down to luck (background match, known-item) Combining textual and visual results is possible in the presented framework –When both have reasonable performance, the combination outperforms the individual runs
Lessons TREC-10 Component queries retrieve intuitive results Convenient for query articulation! Colour dominates texture Sensitivity of EM to initialisation does not harm results Note: Findings are specific to this model, but at least suggest hypotheses for others to investigate
Need for Test Collections Results on one collection do not automatically transfer to another –Multiple collections needed to conclude one technique is better than another What is a good Test Collection? –Should be representative of a realistic task This is what TREC tries to achieve –Results should be measurable Like when using Corel
Plans for TREC-11 Better video representation –More frames per shot –Audio GMM (on MFCC) Spatial and temporal aspects –Shot = background + “objects” Special research interest in the right balance between interactive query articulation and (semi-)automatic query formulation
Future plans Balancing results for heterogeneous queries Propagating generic concepts
Care for more? A Probabilistic Multimedia Retrieval Model and Its Evaluation, Thijs Westerveld, Arjen de Vries, Alex van Ballegooij, Franciska de Jong and Djoerd Hiemstra, EURASIP Journal on Applied Signal Processing, 2003(2)