Finding Better Answers in Video Using Pseudo-Relevance Feedback
Informedia Project, Carnegie Mellon University
Question Answering from Errorful Multimedia Streams (ARDA AQUAINT)
Outline
- Pseudo-Relevance Feedback for Imagery
- Experimental Evaluation
- Results
- Conclusions
Motivation
- Question answering from multimedia streams: questions contain text and visual components
- We want a good image that represents the 'answer'
- Goal: improve the relevance of the images retrieved as answers
- Relevance feedback works for text retrieval; can it work here?
Finding Similar Images by Color
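The slide itself only shows example images. One common way to realize color-based similarity is a normalized color histogram compared by histogram intersection; the sketch below is a generic illustration in that spirit, and the histogram feature, bin count, and intersection measure are assumptions, not the Informedia system's exact descriptors.

```python
# Minimal sketch of color-based image similarity via histogram
# intersection. The RGB histogram and bin count are illustrative
# choices, not the exact features used in the system.
import numpy as np

def color_histogram(image, bins=8):
    """Normalized RGB color histogram of an HxWx3 uint8 image."""
    hist, _ = np.histogramdd(
        image.reshape(-1, 3),
        bins=(bins, bins, bins),
        range=((0, 256), (0, 256), (0, 256)),
    )
    return hist.ravel() / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1 means identical color distributions."""
    return np.minimum(h1, h2).sum()

def rank_by_color(query_image, collection_images):
    """Rank a collection by color similarity to a query image."""
    q = color_histogram(query_image)
    sims = [histogram_intersection(q, color_histogram(img))
            for img in collection_images]
    return np.argsort(sims)[::-1]  # most similar first
```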
Finding Similar Scenes
Similarity Challenge: Images containing similar content
What is Pseudo-Relevance Feedback?
- Relevance feedback (human intervention): QUERY → SYSTEM → RESULTS → human relevance judgments fed back to the system
- Why "pseudo"? The same loop runs without human intervention: QUERY → SYSTEM → RESULTS → automatic feedback
Original System Architecture
- The query (text + image) goes to the retrieval agents
- The text agent produces a text score; the image agent produces an image score
- Final score: a simple weighted linear combination of the video, audio, and text retrieval scores
System Architecture with PRF
- New step: classification through pseudo-relevance feedback (PRF), producing a PRF score
- The PRF score is combined with the scores from all other information agents (text, image) into the final score
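The combination step can be sketched in a few lines: min-max normalize each agent's scores so they are comparable, then take a weighted sum. The normalization choice and the weight values below are illustrative assumptions; the slides state only that the combination is a weighted linear one.

```python
import numpy as np

def fuse_scores(text, image, prf, weights=(0.5, 0.25, 0.25)):
    """Weighted linear combination of per-shot agent scores.

    Each argument is an array of scores over the same shot list;
    the weights here are placeholders, not the tuned values.
    """
    def norm(s):
        # Min-max normalize one agent's scores to [0, 1].
        s = np.asarray(s, dtype=float)
        span = s.max() - s.min()
        return (s - s.min()) / span if span > 0 else np.zeros_like(s)

    w_t, w_i, w_p = weights
    return w_t * norm(text) + w_i * norm(image) + w_p * norm(prf)
```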
Classification from Modified PRF
- An automatic retrieval technique; the modification: use negative data as feedback
- Step by step:
  1. Run the base retrieval algorithm on the image collection: k-nearest neighbor (KNN) on color and texture
  2. Build a classifier: negative examples are the least relevant images in the collection; positive examples are the image queries
  3. Classify all data in the collection to obtain ranked results (sketched below)
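A minimal sketch of this one-pass procedure, assuming precomputed color/texture feature vectors. The slide names only "a classifier"; the SVM, the Euclidean nearest-neighbor base measure, and the number of negatives are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def negative_prf_rank(query_feats, coll_feats, n_neg=100):
    """One-pass pseudo-relevance feedback with negative examples.

    query_feats: (q, d) color/texture features of the query images.
    coll_feats:  (n, d) features of every shot keyframe in the
                 collection. n_neg and the SVM are assumptions.
    """
    # 1. Base retrieval: score each shot by its distance to the
    #    nearest query example (a simple nearest-neighbor measure).
    dists = np.linalg.norm(
        coll_feats[:, None, :] - query_feats[None, :, :], axis=2
    ).min(axis=1)

    # 2. Feedback without a human: treat the LEAST relevant shots
    #    (largest distances) as negatives, the queries as positives.
    neg = coll_feats[np.argsort(dists)[-n_neg:]]
    X = np.vstack([query_feats, neg])
    y = np.hstack([np.ones(len(query_feats)), np.zeros(len(neg))])

    # 3. Train a classifier and re-score the whole collection.
    clf = SVC(probability=True).fit(X, y)
    prf_scores = clf.predict_proba(coll_feats)[:, 1]
    return np.argsort(prf_scores)[::-1]  # new ranking, best first
```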
The Basic PRF Algorithm for Image Retrieval
Input: query examples q_1 … q_n; target examples t_1 … t_n
Output: final score F_i and final ranking for every target t_i
Algorithm:
1. Compute an initial score s_i^0 for each t_i as s_i^0 = f^0(t_i, q_1 … q_n), where f^0 is an initial similarity measure used as the base.
2. Iterate k = 1 … max:
   - Given scores s_i^k, sample positive instances p_i^k and negative instances n_i^k using sampling strategy S.
   - Compute the updated retrieval score s_i^{k+1} = f^{k+1}(t_i), where f^{k+1} is trained/learned from p_i^k and n_i^k.
3. Combine all scores for the final score: F_i = g(s_i^0 … s_i^max).
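The same algorithm in runnable form. Since the slide defines f^0, the sampling strategy S, the learner, and the combiner g only abstractly, they are passed in as parameters here rather than fixed.

```python
def prf(targets, queries, f0, sample, train, combine, max_iter=3):
    """Generic PRF loop following the slide's algorithm.

    f0      -- initial similarity: f0(target, queries) -> score
    sample  -- sampling strategy S: scores -> (pos_idx, neg_idx)
    train   -- learns f_{k+1} from sampled positives/negatives
    combine -- g: per-target score list [s_0 .. s_max] -> F_i
    """
    scores = [f0(t, queries) for t in targets]        # s^0
    history = [scores]
    for _ in range(max_iter):
        pos_idx, neg_idx = sample(scores)             # sample with S
        f_next = train([targets[i] for i in pos_idx],
                       [targets[i] for i in neg_idx])
        scores = [f_next(t) for t in targets]         # s^{k+1}
        history.append(scores)
    # F_i = g(s_i^0 .. s_i^max), then rank by final score.
    final = [combine([h[i] for h in history])
             for i in range(len(targets))]
    return sorted(range(len(targets)), key=lambda i: -final[i])
```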
Analysis: PRF on Synthetic Data
PRF on Synthetic Data
Evaluation Using the 2002 TREC Video Retrieval Task
- Independent collection, queries, and relevant results available
- Search collection:
  - Total length: hours, MPEG-1 format
  - Collected from the Internet Archive and Open Video websites; documentaries from the '50s
  - 14,000 shots; 292,000 I-frames (images)
- Queries:
  - 25 queries
  - Text, image (optional), video (optional)
Summary of ’02 Video Queries
Analysis of Queries (2002)
- Specific item or person: Eddie Rickenbacker, James Chandler, George Washington, Golden Gate Bridge, Price Tower in Bartlesville, OK
- Specific fact: arch in Washington Square Park in NYC, map of the continental US
- Instances of a category: football players, overhead views of cities, one or more women standing in long dresses
- Instances of events/activities: people spending leisure time at the beach, one or more musicians with audible music, crowd walking in an urban environment, locomotive approaching the viewer
Sample Query and Target
Query: Find pictures of Harry Hertz, Director of the National Quality Program, NIST
- Speech transcript: "We're looking for people that have a broad range of expertise, that have business knowledge, that have knowledge on quality management, on quality improvement, and in particular …"
- OCR output (errorful, verbatim): H,arry Hertz a Director aro 7 wa-,i,,ty Program,Harry Hertz a Director
Example Images
Example Images Selected for PRF
Combination of Agents
- Multiple agents:
  - Text retrieval agent
  - Base image retrieval agent: nearest neighbor on color, nearest neighbor on texture
  - Classification PRF agent
- Combining multiple agents: convert each agent's scores to posterior probabilities, then take a linear combination of the probabilities (see the sketch below)
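The slide does not say how raw scores become posterior probabilities. One standard recipe is per-agent logistic calibration fit on held-out relevance judgments, sketched below; the calibration method and the sklearn usage are assumptions, not the system's documented procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate(scores, relevance):
    """Fit a logistic model mapping one agent's raw score to
    P(relevant | score), using held-out binary judgments."""
    lr = LogisticRegression().fit(
        np.asarray(scores, dtype=float).reshape(-1, 1), relevance)
    return lambda s: lr.predict_proba(
        np.asarray(s, dtype=float).reshape(-1, 1))[:, 1]

def combine_agents(agent_scores, calibrators, weights):
    """Linear combination of per-agent posterior probabilities."""
    probs = [cal(s) for s, cal in zip(agent_scores, calibrators)]
    return sum(w * p for w, p in zip(weights, probs))
```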
Results
Methods compared, scored by precision, recall, and mean average precision (MAP):
- Speech Transcripts (SR) only
- SR + Color/Texture
- SR + Color/Texture + PRF
*Video OCR was not relevant in this collection
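For reference, MAP here is the standard mean of per-query average precision over ranked shot lists. A small generic helper showing that computation (not the evaluation code actually used):

```python
def average_precision(ranked_ids, relevant_ids):
    """AP for one query: mean of precision at each relevant hit."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, shot in enumerate(ranked_ids, start=1):
        if shot in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over queries; runs = [(ranked_ids, relevant_ids), ...]."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```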
Distance Function for Query 75
Distance Function for Query 89
Effect of Pos/Neg Ratio and Combination Weight
Selection of Negative Images + Combination
Discussion & Future Work
Discussion:
- Results are sensitive to queries with few relevant answers
- Images alone cannot fully represent the query semantics
Future work:
- Incorporate more agents
- Exploit the relationships among the agents' information
- A better combination scheme
- Include web image search (e.g., Google) as query expansion
Conclusions
- Pseudo-relevance feedback works for text retrieval
- It is not directly applicable to image retrieval from video, because precision in the top-ranked answers is too low
- PRF with negative examples (negative PRF) was effective for finding better images