
1 Efficient Visual Search for Objects in Videos Josef Sivic and Andrew Zisserman Presenters: Ilge Akkaya & Jeannette Chang, March 1, 2011

2 Introduction Text query: results are documents. Image query: results are video frames.

3 State-of-the-Art before this paper… Text-based search for images (Google Images). Object recognition: Barnard et al. (2003), “Matching words and pictures”; Sivic et al. (2005), “Discovering objects and their location in images”; Sudderth et al. (2005), “Learning hierarchical models of scenes, objects, and parts”. Scene classification: Fei-Fei and Perona (2005), “A Bayesian hierarchical model for learning natural scene categories”; Quelhas et al. (2005), “Modeling scenes with local descriptors and latent aspects”; Lazebnik et al. (2006), “Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories”.

4 Introduction (cont.) Retrieve specific objects rather than categories of objects/scenes (e.g., a “Camry” logo vs. cars in general). Employ text-retrieval techniques for visual search, with images as both queries and results. Why a text-retrieval approach? Matches are essentially precomputed, so there is no delay at run time, and any object in the video can be retrieved without modifying the descriptors originally built for the video.

5 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

6 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

7 Pre-Processing (Offline) 1. For each frame, detect affine covariant regions. 2. Track the regions through the video and reject unstable regions. 3. Build the visual vocabulary. 4. Remove stop-listed visual words. 5. Compute tf-idf weighted document frequency vectors. 6. Build the inverted file-indexing structure.

8 Detection of Affine Covariant Regions Typically ~1200 elliptical regions per frame (720x576). Each region is represented by a 128-dimensional SIFT descriptor. Affine-covariant detection combined with SIFT description gives invariance to affine transformations of the image.
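As a rough illustration of this step, here is a minimal Python sketch using OpenCV. It uses MSER (the Maximally-Stable detector of Matas et al.) together with SIFT descriptors; the Shape-Adapted (Harris-affine) detector is not available in core OpenCV, so this covers only one of the two region types, and the helper layout and thresholds are assumptions rather than the authors' implementation.

```python
import cv2
import numpy as np

def detect_and_describe(frame_bgr):
    """Detect affine covariant regions and compute 128-D SIFT descriptors.

    Sketch only: OpenCV's MSER stands in for the Maximally-Stable detector;
    the Shape-Adapted (Harris-affine) detector is not in core OpenCV.
    """
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Maximally-stable regions (intensity watershed blobs).
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)

    # Represent each region by a keypoint at its centroid with a size
    # proportional to the region extent, then compute SIFT descriptors.
    keypoints = []
    for pts in regions:
        (cx, cy), radius = cv2.minEnclosingCircle(pts.astype(np.float32))
        keypoints.append(cv2.KeyPoint(float(cx), float(cy), max(2.0 * radius, 1.0)))

    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(gray, keypoints)
    return keypoints, descriptors  # descriptors: N x 128 float32
```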

9 Two types of affine covariant regions: 1. Shape-Adapted (SA): Mikolajczyk et al. Elliptical shape adaptation about a Harris interest point; often centered on corner-like features. 2. Maximally-Stable (MS): proposed by Matas et al. Intensity watershed image segmentation; high-contrast blobs.

10 Pre-Processing (Offline) 1. For each frame, detect affine covariant regions. 2. Track the regions through the video and reject unstable regions. 3. Build the visual vocabulary. 4. Remove stop-listed visual words. 5. Compute tf-idf weighted document frequency vectors. 6. Build the inverted file-indexing structure.

11 Tracking regions through the video and rejecting unstable regions. Any region that does not survive for 3+ frames is rejected, since such transient regions are unlikely to be useful. This roughly halves the number of regions per frame (to ~600/frame).
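A minimal sketch of this stability filter, assuming a tracker has already produced per-region tracks keyed by a track id; the data layout is an assumption for illustration.

```python
from collections import defaultdict

def reject_unstable_regions(tracks, min_length=3):
    """Keep only regions whose tracks span at least `min_length` frames.

    `tracks` is assumed to map a track id to a list of (frame_idx, region)
    observations produced by a frame-to-frame region tracker.
    """
    stable = {tid: obs for tid, obs in tracks.items() if len(obs) >= min_length}

    # Regroup the surviving observations by frame for the later indexing steps.
    regions_per_frame = defaultdict(list)
    for obs in stable.values():
        for frame_idx, region in obs:
            regions_per_frame[frame_idx].append(region)
    return stable, regions_per_frame
```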

12 Pre-Processing (Offline) 1. For each frame, detect affine covariant regions. 2. Track the regions through the video and reject unstable regions. 3. Build the visual vocabulary. 4. Remove stop-listed visual words. 5. Compute tf-idf weighted document frequency vectors. 6. Build the inverted file-indexing structure.

13 Visual Indexing Using Text-Retrieval Methods (text vs. image analogy). Text: represent words by their stems (‘write’, ‘writing’, ‘written’ all map to ‘write’); Image: cluster similar regions into ‘visual words’. Text: stop-list common words (‘a’, ‘an’, ‘the’); Image: stop-list common visual words. Text: rank search results by how close together the query words occur within the retrieved document; Image: use spatial information to check retrieval consistency.

14 Visual Vocabulary Purpose: cluster regions from multiple frames into a smaller set of groups called ‘visual words’. Each descriptor is a 128-vector, and clustering is done with k-means. ~300K descriptors (600 regions/frame x ~500 keyframes) are mapped into 16K visual words (6,000 SA clusters and 10,000 MS clusters).

15 K-Means Clustering Purpose: Cluster N data points (SIFT descriptors) into K clusters (visual words) K = desired number of cluster centers (mean points) Step 1: Randomly guess K mean points

16 Step 2: Assign each data point to its nearest cluster center. In this paper, the Mahalanobis distance is used to determine the ‘nearest cluster center’: d(x_1, x_2) = sqrt((x_1 − x_2)^T Σ^(−1) (x_1 − x_2)), where Σ is the covariance matrix computed over all descriptors, x_2 is the length-128 mean (cluster center) vector, and the x_1’s are the descriptor vectors (i.e., data points).

17 Step 3: Recalculate the cluster centers, reassign the points, and repeat until convergence.
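A compact NumPy sketch of the three k-means steps with the Mahalanobis distance described above; the initialization, iteration count, and regularization of Σ are assumptions, and the vectorized distance computation is written for clarity rather than memory efficiency.

```python
import numpy as np

def kmeans_mahalanobis(descriptors, k, n_iters=20, seed=0):
    """Minimal k-means sketch using Mahalanobis distance (not the paper's code).

    `descriptors` is an (N, 128) array of SIFT vectors; `k` is the number
    of visual words.
    """
    rng = np.random.default_rng(seed)
    # Covariance over all descriptors, shared by every distance computation.
    cov = np.cov(descriptors, rowvar=False) + 1e-6 * np.eye(descriptors.shape[1])
    cov_inv = np.linalg.inv(cov)

    # Step 1: randomly pick k descriptors as initial cluster centers.
    centers = descriptors[rng.choice(len(descriptors), size=k, replace=False)]

    for _ in range(n_iters):
        # Step 2: assign each descriptor to the nearest center under
        # d(x, c)^2 = (x - c)^T Sigma^-1 (x - c).
        diff = descriptors[:, None, :] - centers[None, :, :]    # (N, k, 128)
        d2 = np.einsum('nkd,de,nke->nk', diff, cov_inv, diff)   # (N, k)
        labels = d2.argmin(axis=1)

        # Step 3: recompute each center as the mean of its assigned points.
        new_centers = np.array([
            descriptors[labels == j].mean(axis=0) if np.any(labels == j)
            else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```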

18 Samples of normalized affine covariant regions Examples of Clusters of Regions

19 Pre-Processing (Offline) 1. For each frame, detect affine covariant regions. 2. Track the regions through the video and reject unstable regions. 3. Build the visual vocabulary. 4. Remove stop-listed visual words. 5. Compute tf-idf weighted document frequency vectors. 6. Build the inverted file-indexing structure.

20 Remove Stop-Listed Words Analogy to text retrieval: ‘a’, ‘and’, ‘the’, … are not distinctive words, and such common words cause mismatches. The 5-10% most common visual words (800-1600 of the 16,000 words) are stopped. (Upper row) matches before stop-listing; (lower row) matches after stop-listing.
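A small sketch of stop-listing, assuming each keyframe is represented as a list of visual-word ids; using total occurrence counts as the frequency statistic is an assumption for illustration.

```python
from collections import Counter

def build_stop_list(word_ids_per_frame, stop_fraction=0.05):
    """Return the ids of the most frequent visual words to ignore.

    The top `stop_fraction` of words by total count (5-10% in the paper)
    are stop-listed.
    """
    counts = Counter(w for frame in word_ids_per_frame for w in frame)
    n_stop = int(stop_fraction * len(counts))
    return {w for w, _ in counts.most_common(n_stop)}

def remove_stopped_words(word_ids_per_frame, stop_list):
    """Drop stop-listed word occurrences from every frame."""
    return [[w for w in frame if w not in stop_list] for frame in word_ids_per_frame]
```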

21 Pre-Processing (Offline) 1. For each frame, detect affine covariant regions. 2. Track the regions through the video and reject unstable regions. 3. Build the visual vocabulary. 4. Remove stop-listed visual words. 5. Compute tf-idf weighted document frequency vectors. 6. Build the inverted file-indexing structure.

22 tf-idf Weighting (term frequency-inverse document frequency weighting). n_id: number of occurrences of (visual) word i in document (frame) d; n_d: total number of words in document d; N_i: number of documents containing word i; N: number of documents in the database. The weighted word frequency is t_i = (n_id / n_d) · log(N / N_i).

23 Each document (frame) d is represented by the vector v_d = (t_1, …, t_v)^T, where v is the number of visual words in the vocabulary and v_d is the tf-idf vector of that particular frame d.
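A sketch of computing the per-frame tf-idf vectors from the definitions above; the dense array representation is an assumption made for illustration (the paper's implementation relies on sparse vectors and the inverted file).

```python
import numpy as np
from collections import Counter

def tfidf_vectors(word_ids_per_frame, vocab_size):
    """Compute one tf-idf vector per keyframe.

    Returns an (n_frames, vocab_size) array whose entry (d, i) is
    t_i = (n_id / n_d) * log(N / N_i).
    """
    n_frames = len(word_ids_per_frame)
    tf = np.zeros((n_frames, vocab_size))
    doc_freq = np.zeros(vocab_size)

    for d, frame in enumerate(word_ids_per_frame):
        counts = Counter(frame)
        n_d = max(len(frame), 1)
        for i, n_id in counts.items():
            tf[d, i] = n_id / n_d
        doc_freq[list(counts)] += 1   # each word counted once per frame

    idf = np.log(n_frames / np.maximum(doc_freq, 1))
    return tf * idf
```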

24 Inverted File Indexing For each visual word, store the list of frames in which it appears, e.g.: word 1 → frames 1, 4, 5; word 2 → frames 1, 2, 10; …; word N → ….
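A minimal sketch of the inverted file: a mapping from each visual word to the frames containing it, so at query time only frames that share words with the query need to be scored.

```python
from collections import defaultdict

def build_inverted_index(word_ids_per_frame):
    """Map each visual-word id to the set of frames containing it."""
    index = defaultdict(set)
    for frame_idx, frame in enumerate(word_ids_per_frame):
        for w in frame:
            index[w].add(frame_idx)
    return index

def candidate_frames(index, query_word_ids):
    """Frames sharing at least one visual word with the query region."""
    frames = set()
    for w in query_word_ids:
        frames |= index.get(w, set())
    return frames
```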

25 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

26 Real-Time Query 1. Determine the set of visual words found within the query region 2. Retrieve keyframes based on visual word frequencies (Ns = 500) 3. Re-rank retrieved keyframes using spatial consistency

27 Retrieve keyframes based on visual word frequencies. v_q is the tf-idf vector of visual word frequencies for the query region. Frames are ranked by the normalized scalar product of v_q with each frame vector v_d: sim(v_q, v_d) = (v_q · v_d) / (||v_q|| ||v_d||).
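A short sketch of this ranking step using the normalized scalar product; the dense matrix of frame vectors is an assumption made for brevity.

```python
import numpy as np

def rank_frames(v_q, frame_vectors, n_short_list=500):
    """Rank keyframes by the normalized scalar product (cosine similarity).

    `v_q` is the query tf-idf vector and `frame_vectors` the (n_frames, v)
    matrix of per-frame tf-idf vectors; returns the indices and scores of
    the top `n_short_list` frames (Ns = 500 in the paper).
    """
    norms = np.linalg.norm(frame_vectors, axis=1) * np.linalg.norm(v_q)
    sims = frame_vectors @ v_q / np.maximum(norms, 1e-12)
    order = np.argsort(-sims)[:n_short_list]
    return order, sims[order]
```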

28 Spatial Consistency Voting Analogy: Google’s text retrieval, which favors documents where the query words appear close together. Matched covariant regions in a retrieved frame should have a spatial arrangement similar to that in the query region. The search area is the 15 nearest spatial neighbors of each match; every neighboring region that also matches in the retrieved frame casts a vote for that frame.

29 Spatial Consistency Voting For a matched pair of words (A, B), each region in the defined search area that matches in both frames casts a vote for the match (A, B). (Upper row) matches after stop-listing; (lower row) remaining matches after spatial consistency voting.
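A sketch of the voting scheme under stated assumptions: matches are (query region, frame region) pairs sharing a visual word, and a neighborhood is the 15 nearest region centers in each image; the paper's exact vote accumulation may differ in detail.

```python
import numpy as np

def spatial_consistency_votes(query_pts, frame_pts, matches, k_neighbors=15):
    """Count spatial-consistency votes for one retrieved frame.

    `matches` is a list of (i, j) pairs meaning query region i matched frame
    region j; `query_pts` / `frame_pts` are (N, 2) arrays of region centers.
    A match earns one vote for every other match whose regions fall among
    its k nearest neighbors in both images.
    """
    votes = 0
    for i, j in matches:
        # k nearest neighbors of this match's regions in each image (self excluded).
        dq = np.linalg.norm(query_pts - query_pts[i], axis=1)
        df = np.linalg.norm(frame_pts - frame_pts[j], axis=1)
        nn_q = set(np.argsort(dq)[1:k_neighbors + 1])
        nn_f = set(np.argsort(df)[1:k_neighbors + 1])

        # Other matches lying in both neighborhoods support this match.
        votes += sum(1 for (i2, j2) in matches
                     if (i2, j2) != (i, j) and i2 in nn_q and j2 in nn_f)
    return votes  # used to re-rank the short list of retrieved frames
```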

30 (Query frame and sample retrieved frame, panels 1-8.) 1: query region; 2: close-up of 1; 3-4: initial matches; 5-6: matches after stop-listing; 7-8: matches after spatial consistency voting.

31 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

32 Implementation Details Offline processing: a typical feature-length film has 100-150K frames, refined to 4000-6000 keyframes. Descriptors are computed for the stable regions in each keyframe, each region is assigned to a visual word, and the visual words over all keyframes are assembled into an inverted file structure.

33 Algorithm Implementation Real-time process: the user selects a query region, visual words are identified within the query region, a short list of Ns = 500 keyframes is retrieved based on tf-idf vector similarity, and the similarity is then recomputed using spatial consistency voting.

34 Example Visual Search

35 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

36 Retrieval Examples Query Image A Few Retrieved Matches

37 Retrieval Examples (cont.) Query Image A Few Retrieved Matches

38 Performance of the Algorithm Six object queries were tried: (1) red clock, (2) black clock, (3) “Frame’s” sign, (4) digital clock, (5) “Phil” sign, (6) microphone.

39 Performance of the Algorithm (cont.)

40 Ideally, precision = 1 at all recall values; the Average Precision (AP) summarizes the precision-recall curve, and ideally AP = 1.
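For reference, a standard Average Precision computation over a ranked result list (this is the usual definition, not code from the paper):

```python
import numpy as np

def average_precision(ranked_relevance):
    """Average Precision for one query.

    `ranked_relevance` is a 0/1 sequence over the ranked results, 1 marking
    a relevant (correct) frame. AP averages the precision at each relevant hit.
    """
    rel = np.asarray(ranked_relevance, dtype=float)
    if rel.sum() == 0:
        return 0.0
    cum_hits = np.cumsum(rel)
    precision_at_k = cum_hits / (np.arange(len(rel)) + 1)
    return float((precision_at_k * rel).sum() / rel.sum())

# Example: relevant results at ranks 1, 3, and 4 -> AP = (1 + 2/3 + 3/4) / 3
print(average_precision([1, 0, 1, 1, 0]))  # ~0.806
```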

41 Examples of Missed Shots Extreme viewing angles. (Figure: original query object and a low-ranked shot.)

42 Examples of Missed Shots (cont.) Significant changes in scale and motion blurring. (Figure: original query object and a low-ranked shot.)

43 Qualitative Assessment of Performance General trends: higher precision at low recall levels; a bias towards lightly textured regions detectable by the SA/MS detectors. These challenges could be addressed by adding more types of covariant regions. Other difficulties: textureless regions (e.g., a mug), thin or wiry objects (e.g., a bike), and highly deformable objects (e.g., clothing).

44 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

45 Quality of Individual Visual Words Using a single visual word as the query tests the expressiveness of the visual vocabulary. Sample query: given an object of interest, select one of the visual words from that object, retrieve all frames that contain the visual word (no ranking), and count a retrieval as correct if the frame contains the object of interest.

46 Examples of Individual Visual Words Top row: Scale-normalized close-ups of elliptical regions overlaid on query image Bottom row: Corresponding normalized regions

47 Results of Individual Word Searches Individual words are “noisy”: intuitively, a word occurs on multiple objects and does not cover all occurrences of any one object.

48 Quality of Individual Visual Words Unrealistic: require each word to occur on only one object (high precision), so a growing number of objects would require a growing number of words. Realistic: visual words are shared across objects, with each object represented by a combination of words.

49 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

50 Searching for Objects From Outside of the Movie External query images from the internet were used, and all occurrences of each external query in the movies were manually labeled. Results: Sony logo — 3 occurrences, retrieved at ranks 1, 4, and 35, AP 0.53; Hollywood sign — 1 occurrence, retrieved at rank 1, AP 1; Notre Dame — 1 occurrence, retrieved at rank 1, AP 1.

51 Sample External Query Results Potential Applications

52 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

53 Challenge I: Visual Vocabularies for Very Large Scale Retrieval Current progress: a ~150,000-frame feature movie is reduced to ~6000 keyframes and then processed. Ultimate goal: indexing billions of online images to build a visual search engine.

54 Should the vocabulary grow in size as the image archive grows? How discriminative should the words be? Does a vocabulary learned from one movie generalize to an outside database of images? Learning a universal visual vocabulary remains a challenge. (a), (c): external images downloaded from the internet; (b): correct retrieval frame from the movie ‘Pretty Woman’; (d): correct retrieval from the movie ‘Charade’.

55 Challenge II: Retrieval of 3D Objects The current algorithm handles slight changes in viewpoint, illumination, and partial occlusion thanks to the affine-covariant SIFT features. However, retrieving 3D objects across large viewpoint changes is a fundamentally harder challenge.

56 Proposed approach 1: automatic association of images using temporal information, e.g., grouping front/side/back views of a car in a video. This is possible on the query side and/or the database side. Query-side matching: associated query frames are computed and used together for 3D object search.

57 Proposed approach 1 (cont.) Grouping on the database side: a query on a single aspect is expected to retrieve the pre-grouped frames associated with the 3D object. (Top row) query image; (bottom rows) matching frames.

58 Proposed approach 2: building an explicit 3-D model for each 3-D object in the video. The focus is more on model building than on detection, and only rigid objects are considered.

59 Challenge III: Verification Using Spatial Structure Spatial consistency was helpful, but could be improved. A few suggestions: use caution with measures that assume rigid geometry; reduce cost with a hierarchical approach. Two complementary methods: Ferrari et al. (2004), matching deformable objects; Rothganger et al. (2003), matching 3D objects.

60 Verification Using Spatial Structure (cont.) Method 1 (Ferrari): based on the spatial overlap of local regions; requires that regions match individually and that the pattern of intersection between neighboring regions is preserved. Performance: works well with deformations (pro), but is computationally expensive (con).

61 Verification Using Spatial Structure (cont.) Method 2 (Rothganger): based on a 3-D object model; requires consistency of local appearance descriptors as well as geometric consistency. Performance: the object can be matched in diverse (even novel) poses (pro), but the 3-D model is built offline and requires up to 20 images of the object taken from different viewpoints (con).

62 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

63 Conclusion Demonstrated a scalable object-retrieval architecture that uses a visual vocabulary based on vector-quantized, viewpoint-invariant descriptors and efficient indexing techniques from text retrieval. A few notable differences between document and image bag-of-words retrieval: the use of spatial information, the number of “words” in a query, and the matching requirements.

64 Looking forward… TinEye (May 2008): an image-based search engine; given a query image, it searches for altered versions of that image (scaled or cropped); 1.86 billion images indexed. Google Goggles (2009): take a photo with a phone and get results from the internet; limited categories.

65 Overview of the Talk Visual Search Algorithm: Offline Pre-Processing, Real-Time Query, A Few Implementation Details. Performance: General Results, Testing Individual Words, Using External Images as Queries. A Few Challenges and Future Directions. Concluding Remarks. Demo of the Algorithm.

66 Demo of Retrieval Algorithm Live demonstration

67 Main References D. Lowe. Distinctive Image Features from Scale-Invariant Keypoints. International Journal of Computer Vision, 60(2):91-110, 2004. J. Sivic and A. Zisserman. Efficient visual search for objects in videos. Proceedings of the IEEE, 96(4):548-566, 2008. W. Qian. “Video Google: A Text Retrieval Approach to Object Matching in Videos.” www.mriedel.ece.umn.edu/wiki/index.php/Weikang_Qian

