1
Words and Pictures Rahul Raguram
2
Motivation Huge datasets where text and images co-occur ~ 3.6 billion photos
3
Motivation Huge datasets where text and images co-occur
4
Motivation Huge datasets where text and images co-occur Photos in the news
5
Motivation Huge datasets where text and images co-occur Subtitles
6
Motivation Interacting with large image datasets Image content ‘Blobworld’ [Carson et al., 99]
7
Motivation Interacting with large photo collections Image content ‘Blobworld’ [Carson et al., 99]
9
Motivation Interacting with large photo collections Image content Query by sketch [Jacobs et al., 95]
11
Motivation Interacting with large photo collections There is a large disparity between user needs and what technology provides (Armitage and Enser 1997, Enser 1993, Enser 1995, Markkula and Sormunen 2000): queries based on image histograms, texture, overall appearance, etc. make up a vanishingly small fraction of real user queries.
12
Motivation Interacting with large photo collections Text queries
13
Motivation Text and images may be separately ambiguous; jointly they tend not to be. Image descriptions often leave out what is visually obvious (e.g., the colour of a flower) …but often include properties that are difficult to infer using vision (e.g., the species of the flower).
14
Linking words and pictures: applications include automated image annotation, auto-illustration, and browsing support. [Figure examples: keywords "tiger cat mouth teeth"; "statue of liberty".]
15
Learning the Semantics of Words and Pictures Barnard and Forsyth, ICCV 2001
16
Key idea Model the joint distribution of words and image features: a joint probability model for text and image features. [Figure: example image/keyword pairs scored by such a model; random bits: impossible; keywords "apple tree": unlikely; keywords "sky water sun": reasonable.] Slide credit: David Forsyth
17
Input Representation Extract keywords Segment the image into a set of ‘blobs’
18
EM revisited: Image segmentation Examples from: http://www.eecs.umich.edu/~silvio/teaching/
19
EM revisited: Image segmentation Generative model: the image is produced by a mixture of k segments (Segment 1, Segment 2, …, Segment k). Problem: you don't know the parameters, the mixing weights, or the segmentation.
20
EM revisited: Image segmentation If you knew the segmentation, then you could find the parameters easily: compute maximum likelihood estimates for each segment's parameters; the fraction of the image in each segment gives its mixing weight.
21
EM revisited: Image segmentation If you knew the segmentation, then you could find the parameters easily. If you knew the parameters, you could easily determine the segmentation. Solution: iterate. Calculate the posteriors (soft assignments of pixels to segments), then re-estimate the parameters.
22
EM revisited: Image segmentation Image from: http://www.ics.uci.edu/~dramanan/teaching/
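To make the alternation above concrete, here is a minimal sketch of EM for a k-component Gaussian mixture over per-pixel feature vectors (e.g. colour). It illustrates the generic algorithm referred to on these slides, not the specific segmenter used in the paper; the feature choice and the diagonal-covariance simplification are assumptions.

```python
import numpy as np

def em_segment(features, k, n_iters=50, seed=0):
    """EM for a k-component Gaussian mixture over per-pixel features.

    features: (num_pixels, d) array, e.g. RGB values reshaped to (H*W, 3).
    Returns soft assignments (num_pixels, k) and the fitted parameters.
    """
    rng = np.random.default_rng(seed)
    n, d = features.shape
    # Initialise means from random pixels, per-feature variances, uniform weights.
    means = features[rng.choice(n, size=k, replace=False)].copy()
    variances = np.full((k, d), features.var(axis=0) + 1e-6)
    weights = np.full(k, 1.0 / k)

    for _ in range(n_iters):
        # E-step: posterior probability that each pixel came from each segment.
        log_resp = np.zeros((n, k))
        for j in range(k):
            diff = features - means[j]
            log_resp[:, j] = (np.log(weights[j])
                              - 0.5 * np.sum(np.log(2 * np.pi * variances[j]))
                              - 0.5 * np.sum(diff ** 2 / variances[j], axis=1))
        log_resp -= log_resp.max(axis=1, keepdims=True)
        resp = np.exp(log_resp)
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters from the soft segmentation.
        nk = resp.sum(axis=0) + 1e-12
        weights = nk / n                      # fraction of the image in each segment
        means = (resp.T @ features) / nk[:, None]
        for j in range(k):
            diff = features - means[j]
            variances[j] = (resp[:, j, None] * diff ** 2).sum(axis=0) / nk[j] + 1e-6

    return resp, (weights, means, variances)
```

The "fraction of the image in the segment gives the mixing weight" statement from the previous slides corresponds to the `weights = nk / n` line in the M-step.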
23
Input Representation Segment the image into a set of ‘blobs’ Each region/blob represented by a vector of 40 features (size, position, colour, texture, shape)
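As a rough illustration of what such a blob feature vector might contain: the paper uses 40 features per region, but the particular features computed below are assumptions for illustration, not the paper's exact list.

```python
import numpy as np

def blob_features(image, mask):
    """Build a small feature vector for one segmented region ('blob').

    image: (H, W, 3) float array; mask: (H, W) boolean array for the region.
    """
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    area = mask.mean()                          # size, relative to the image
    cy, cx = ys.mean() / h, xs.mean() / w       # normalised position
    mean_colour = image[mask].mean(axis=0)      # average colour
    std_colour = image[mask].std(axis=0)        # crude texture proxy
    # Crude shape descriptor: ratio of the bounding-box sides.
    aspect = (np.ptp(ys) + 1) / (np.ptp(xs) + 1)
    return np.concatenate([[area, cy, cx, aspect], mean_colour, std_colour])
```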
24
Modeling image dataset statistics Generative, hierarchical model; an extension of Hofmann's model for text (1998). Each node emits blobs and words: higher nodes emit more general words and blobs (e.g. "sky"), middle nodes emit moderately general words and blobs (e.g. "sun"), and lower nodes emit more specific words and blobs (e.g. "waves").
25
Modeling image dataset statistics Generative, hierarchical model; an extension of Hofmann's model for text (1998). Following a path from root to leaf generates an image and its associated text (e.g. "sky sun waves").
26
Modeling image dataset statistics Generative, hierarchical model Extension of Hofmann’s model for text (1998) Each cluster is associated with a path from the root to a leaf Cluster of images
27
Modeling image dataset statistics Generative, hierarchical model; an extension of Hofmann's model for text (1998). Each cluster is associated with a path from the root to a leaf, and adjacent clusters share the nodes nearest the root: e.g. a shared "sky" node near the root, "sun, sea" below it, and distinct leaves "waves" / "rocks", giving the adjacent clusters "sun sea sky waves" and "sun sea sky rocks".
28
Modeling image dataset statistics A document D is the set of blobs and words for one image. Each cluster c is associated with a path from a leaf to the root, and the items in D are conditionally independent given the nodes l along that path: $p(D) = \sum_c p(c) \prod_{i \in D} \sum_{l} p(i \mid l, c)\, p(l \mid c)$.
29
Modeling image dataset statistics For blobs, each node models blob features with a Gaussian distribution; for words, each node keeps a table of word frequencies (a multinomial over the vocabulary).
30
Modeling image dataset statistics Model fitting: EM. The missing data are the path and the nodes that generated each data element, i.e. two hidden variables: (1) document d is in cluster c, and (2) item i of document d was generated at level l. If the path and node were known for each data element, it would be easy to get maximum likelihood estimates of the parameters; given a parameter estimate, the path and node are easy to figure out.
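Written out, the two posteriors computed in the E-step take the following form (a generic sketch for this kind of hierarchical mixture; the paper's exact parameterisation may differ slightly):

```latex
p(c \mid d) \;\propto\; p(c) \prod_{i \in d} \sum_{l} p(i \mid l, c)\, p(l \mid c)
\qquad\qquad
p(l \mid i, c) \;\propto\; p(i \mid l, c)\, p(l \mid c)
```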
31
Results Clustering Does text+image clustering have an advantage? Only text
32
Results Clustering Does text+image clustering have an advantage? Only blob features
33
Results Clustering Does text+image clustering have an advantage? Both text and image segments
34
Results Clustering Does text+image clustering have an advantage? User study: Generate 64 clusters for 3000 images Generate 64 random clusters from the same images Present random cluster to user, ask to rate coherence (yes/no) 94% accuracy
35
Results Image search Supply a combination of text + image features. Approach: compute, for each candidate image, the probability of emitting the query items. Q – set of query items; d – candidate document.
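Given the conditional-independence assumption used throughout the model, the natural way to score a candidate is to multiply the per-item probabilities. A minimal ranking sketch, where `p_item_given_doc` is a hypothetical stand-in for the model's per-item probability:

```python
import math

def query_score(query_items, doc, p_item_given_doc):
    """Log-probability that document `doc` emits all items in the query."""
    return sum(math.log(p_item_given_doc(q, doc)) for q in query_items)

def rank_images(query_items, documents, p_item_given_doc, top=10):
    """Return the `top` candidate documents, best query score first."""
    scored = sorted(documents,
                    key=lambda d: query_score(query_items, d, p_item_given_doc),
                    reverse=True)
    return scored[:top]
```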
36
Results Image search Image credit: David Forsyth
39
Results Auto-annotation Compute the probability of each vocabulary word given the image's blobs:
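One way to write the quantity being computed under the model above (a sketch; the paper's exact arrangement of the terms may differ): the posterior of a word w given the image's blobs B, marginalising over clusters and levels.

```latex
p(w \mid B) \;\propto\; \sum_{c} p(c \mid B) \sum_{l} p(w \mid l, c)\, p(l \mid c),
\qquad
p(c \mid B) \;\propto\; p(c) \prod_{b \in B} \sum_{l} p(b \mid l, c)\, p(l \mid c)
```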
40
Results Auto-annotation Quantitative performance: Use 160 Corel CDs, each with 100 images (grouped by theme). Select 80 of the CDs and split them into training (75%) and test (25%) sets; the remaining 80 CDs form a 'harder' test set. Model scoring: n – number of words for the image; r – number of words predicted correctly; w – number of words predicted incorrectly; N – vocabulary size. All words whose predicted probability exceeds a threshold are predicted (see the counting sketch below).
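The counting behind this protocol is straightforward. Below is a small sketch that computes n, r and w for one image from its true keywords and the model's word probabilities; it only implements the counts defined on the slide, not the final combined score, which is not reproduced here.

```python
def annotation_counts(true_words, predicted_probs, threshold):
    """Return (n, r, w) for one image.

    true_words: set of ground-truth keywords for the image.
    predicted_probs: dict mapping each vocabulary word to its predicted probability.
    """
    predicted = {word for word, p in predicted_probs.items() if p > threshold}
    n = len(true_words)                      # number of words for the image
    r = len(predicted & true_words)          # words predicted correctly
    w = len(predicted - true_words)          # words predicted incorrectly
    return n, r, w
```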
41
Results Auto-annotation Quantitative performance: Use 160 Corel CDs, each with 100 images (grouped by theme). Select 80 of the CDs and split them into training (75%) and test (25%) sets; the remaining 80 CDs form a 'harder' test set. Model scoring: n – number of words for the image; r – number of words predicted correctly. In this variant the model predicts exactly n words. Note that one can do surprisingly well just by using the empirical word frequency!
42
Results Auto-annotation Quantitative performance: Score of 0.1 indicates roughly 1 out of every 3 words is correctly predicted (vs. 1 out of 6 for the empirical model)
43
Names and Faces in the News Berg et al., CVPR 2004
44
Motivation President George W. Bush makes a statement in the Rose Garden while Secretary of Defense Donald Rumsfeld looks on, July 23, 2003. Rumsfeld said the United States would release graphic photographs of the dead sons of Saddam Hussein to prove they were killed by American troops. Photo by Larry Downing/Reuters
46
Motivation Organize news photographs for browsing and retrieval Build a large ‘real-world’ face dataset Datasets captured in lab conditions do not truly reflect the complexity of the problem
47
Motivation Organize news photographs for browsing and retrieval Build a large ‘real-world’ face dataset Datasets captured in lab conditions do not truly reflect the complexity of the problem In many traditional face datasets, it’s possible to get excellent performance by using no facial features at all (Shamir, 2008)
48
Motivation Top-left 100×100 pixels of the first 10 individuals in the color FERET dataset. The IDs of the subjects are listed to the right of the images.
49
Dataset Download news photos and captions: ~500,000 images from Yahoo News, collected over a period of two years. Run a face detector: 44,773 faces, resized to 86×86 pixels. Extract names from the captions: identify two or more capitalized words followed by a present-tense verb (see the sketch below). Associate every face in the image with every detected name. The goal is to label each face detector output with the correct name.
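A toy version of the name-extraction heuristic just described. The verb list and the exact regular expression are assumptions for illustration; a fuller implementation would need to handle middle initials, titles and abbreviations more carefully.

```python
import re

# Two or more capitalised words, followed by a small, illustrative set of
# present-tense verbs commonly seen in news captions.
NAME_PATTERN = re.compile(
    r"((?:[A-Z][a-z]+ ){1,}[A-Z][a-z]+)\s+"
    r"(?:waves|smiles|speaks|makes|looks|attends|arrives|holds|gestures)\b"
)

def extract_names(caption):
    """Return candidate names: runs of 2+ capitalised words before a present-tense verb."""
    return NAME_PATTERN.findall(caption)

caption = ("President George W. Bush makes a statement in the Rose Garden "
           "while Secretary of Defense Donald Rumsfeld looks on.")
print(extract_names(caption))   # -> ['Defense Donald Rumsfeld']
```

Note how this toy pattern misses "George W. Bush" because of the middle initial and clips "Secretary of" from Rumsfeld's title; clipped variants such as "Defense Donald Rumsfeld" are exactly what the cluster-merging step later in this talk has to reconcile.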
50
Dataset Properties Diverse Large variation in lighting and pose Broad range of expressions
51
Dataset Properties Diverse: large variation in lighting and pose; broad range of expressions. Name frequencies follow a long-tailed distribution. Example captions: "Doctor Nikola shows a fork that was removed from an Israeli woman who swallowed it while trying to catch a bug that flew in to her mouth, in Poriah Hospital northern Israel July 10, 2003. Doctors performed emergency surgery and removed the fork. (Reuters)" "President George W. Bush waves as he leaves the White House for a day trip to North Carolina, July 25, 2002. A White House spokesman said that Bush would be compelled to veto Senate legislation creating a new department of homeland security unless changes are made. (Kevin Lamarque/Reuters)"
52
Preprocessing Rectify faces to canonical position Train 5 SVMs as feature detectors Corners of left and right eyes, tip of the nose, corners of the mouth Use 150 hand-clicked faces to train the SVMs For a test image, run the SVMs over the entire image Produces 5 feature maps Detect maximal outputs in the 5 maps, and estimate the affine transformation to the canonical pose Image credit: Y. J. Lee
53
Preprocessing Rectify faces to canonical position Train 5 SVMs as feature detectors Corners of left and right eyes, tip of the nose, corners of the mouth Use 150 hand-clicked faces to train the SVMs For a test image, run the SVMs over the entire image Produces 5 feature maps Detect maximal outputs in the 5 maps, and estimate the affine transformation to the canonical pose Reject images with poor rectification scores
54
Preprocessing Rectify faces to canonical position Train 5 SVMs as feature detectors (corners of the left and right eyes, tip of the nose, corners of the mouth), using 150 hand-clicked faces. For a test image, run the SVMs over the entire image to produce 5 feature maps; detect maximal outputs in the 5 maps and estimate the affine transformation to the canonical pose. Reject images with poor rectification scores; this leaves 34,623 face images. Throw out images with more than 4 associated names, leaving 27,742 faces.
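A minimal sketch of the final rectification step: given the five detected feature locations and their target positions in the canonical face, estimate the affine transform by least squares. The use of plain least squares over all five points is an illustrative assumption.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src_pts -> dst_pts.

    src_pts, dst_pts: (5, 2) arrays of (x, y) feature locations, e.g. eye
    corners, nose tip and mouth corners in the detected face and in the
    canonical face.
    Returns a 2x3 matrix A such that dst is approximately A @ [x, y, 1].
    """
    n = src_pts.shape[0]
    ones = np.ones((n, 1))
    X = np.hstack([src_pts, ones])            # (n, 3) homogeneous source points
    A, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    return A.T                                # (2, 3) affine matrix
```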
55
Face representation 86×86 images correspond to 7396-dimensional vectors. However, relatively few 7396-dimensional vectors actually correspond to valid face images; we want to effectively model the subspace of valid face images. Slide credit: S. Lazebnik
56
Face representation We want to construct a low-dimensional linear subspace that best explains the variation in the set of face images Slide credit: S. Lazebnik
57
Principal Component Analysis (PCA) Given N data points x_1, …, x_N in R^d, consider the projection onto a 1-dimensional subspace denoted by a d-dimensional unit vector u_1. Projection of each data point: $u_1^T x_n$. Mean of the projected data: $u_1^T \bar{x}$, where $\bar{x} = \frac{1}{N}\sum_n x_n$. Variance of the projected data: $\frac{1}{N}\sum_n (u_1^T x_n - u_1^T \bar{x})^2 = u_1^T S u_1$, where we define the covariance matrix $S = \frac{1}{N}\sum_n (x_n - \bar{x})(x_n - \bar{x})^T$. Formulation: C. Bishop
58
Principal Component Analysis (PCA) Want to maximize the projected variance (an alternate formulation minimizes the sum-of-square projection errors): maximize $u_1^T S u_1$ subject to $u_1^T u_1 = 1$. Using a Lagrange multiplier, maximize $u_1^T S u_1 + \lambda_1 (1 - u_1^T u_1)$, which gives $S u_1 = \lambda_1 u_1$: u_1 must be an eigenvector of S. The projected variance is then $u_1^T S u_1 = \lambda_1$, so choose the eigenvector with the maximum eigenvalue to maximize variance. Image, formulation: C. Bishop
59
Principal Component Analysis (PCA) The direction that captures the maximum covariance of the data is the eigenvector corresponding to the largest eigenvalue of the data covariance matrix Furthermore, the top k orthogonal directions that capture the most variance of the data are the k eigenvectors corresponding to the k largest eigenvalues Slide credit: S. Lazebnik
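A compact numpy sketch of the computation just described (eigendecomposition of the data covariance); it illustrates the generic algorithm and is not tuned to the face data.

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal directions and the projected data.

    X: (N, d) data matrix with one sample per row.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    S = (Xc.T @ Xc) / X.shape[0]              # d x d covariance matrix
    eigvals, eigvecs = np.linalg.eigh(S)      # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]     # take the k largest
    components = eigvecs[:, order]            # (d, k) principal directions
    return components, Xc @ components        # directions and projections
```

For 86×86 face images d = 7396, so in practice one would work with the N×N Gram matrix or an SVD of the centred data rather than forming the 7396×7396 covariance explicitly.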
60
Limitations of PCA PCA assumes that the data has a Gaussian distribution (mean µ, covariance matrix Σ) Slide credit: S. Lazebnik
61
Limitations of PCA The direction of maximum variance is not always good for classification Image credit: C. Bishop
62
Limitation #1 Shape of the data not modeled well by the linear principal components
63
The return of the kernel trick Basic idea: express conventional PCA in terms of dot products. From before: $S = \frac{1}{N}\sum_n x_n x_n^T$ (for convenience, assume that you've subtracted off the mean from each vector). Consider a nonlinear function $\Phi(x)$ mapping into M dimensions (M > D), and assume $\sum_n \Phi(x_n) = 0$. Covariance matrix in feature space: $C = \frac{1}{N}\sum_n \Phi(x_n)\Phi(x_n)^T$. Formulation: C. Bishop
64
The return of the kernel trick Covariance matrix in feature space: $C = \frac{1}{N}\sum_n \Phi(x_n)\Phi(x_n)^T$, now M×M, with eigenvector equation $C v_i = \lambda_i v_i$. Substituting for C: $\frac{1}{N}\sum_n \Phi(x_n)\left(\Phi(x_n)^T v_i\right) = \lambda_i v_i$. Because the terms $\Phi(x_n)^T v_i$ are scalar values, the eigenvectors $v_i$ can be written as a linear combination of the $\Phi(x_n)$: $v_i = \sum_n a_{in}\Phi(x_n)$. Formulation: C. Bishop
65
The return of the kernel trick Key step: express this in terms of the kernel function $k(x_n, x_m) = \Phi(x_n)^T \Phi(x_m)$. Multiplying both sides by $\Phi^T(x_l)$ gives $K^2 a_i = \lambda_i N K a_i$, i.e. $K a_i = \lambda_i N a_i$, where K is the N×N kernel (Gram) matrix. Projection of a point x onto eigenvector i: $\Phi(x)^T v_i = \sum_n a_{in}\, k(x, x_n)$. Formulation: C. Bishop
66
Kernel PCA Image credit: C. Bishop
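A small sketch of the resulting procedure: build the kernel (Gram) matrix, centre it in feature space, solve the eigenproblem for the coefficients a_i, and project points via the kernel. The RBF kernel and its width are assumptions for illustration.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_pca(X, k, kernel=rbf_kernel):
    """Return projections of X onto the top-k kernel principal components."""
    N = X.shape[0]
    K = kernel(X, X)
    one_n = np.full((N, N), 1.0 / N)
    # Centre the data in feature space (equivalent to subtracting the feature mean).
    K_centered = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    eigvals, eigvecs = np.linalg.eigh(K_centered)     # eigenvalue of K equals N * lambda_i
    order = np.argsort(eigvals)[::-1][:k]
    # Scale coefficients so the feature-space eigenvectors v_i have unit norm.
    alphas = eigvecs[:, order] / np.sqrt(np.maximum(eigvals[order], 1e-12))
    return K_centered @ alphas                        # projections of the training points
```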
67
Limitation #2 The direction of maximum variance is not always good for classification Image credit: C. Bishop
68
Linear Discriminant Analysis (LDA) Goal: Perform dimensionality reduction while preserving as much of the class discriminatory information as possible Try to find directions along which the classes are best separated Capable of distinguishing image variation due to identity from variation due to other sources such as illumination and expression
69
Linear Discriminant Analysis (LDA) Define inter-class (B) and intra-class (W) scatter matrices. LDA computes a projection v that maximizes the ratio $\frac{v^T B v}{v^T W v}$ by solving the generalized eigenvalue problem $B v = \lambda W v$. W – intra-class scatter; B – inter-class scatter.
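A compact sketch of that computation (scatter matrices plus the generalized eigenproblem), using scipy's symmetric-definite solver. The small ridge term added to W for numerical stability is an implementation choice, not part of the slide.

```python
import numpy as np
from scipy.linalg import eigh

def lda(X, y, k):
    """Return the top-k LDA directions for data X (N, d) with integer labels y."""
    d = X.shape[1]
    overall_mean = X.mean(axis=0)
    W = np.zeros((d, d))                      # intra-class scatter
    B = np.zeros((d, d))                      # inter-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        W += (Xc - mc).T @ (Xc - mc)
        diff = (mc - overall_mean)[:, None]
        B += Xc.shape[0] * (diff @ diff.T)
    W += 1e-6 * np.eye(d)                     # ridge for numerical stability
    eigvals, eigvecs = eigh(B, W)             # generalized problem: B v = lambda W v
    order = np.argsort(eigvals)[::-1][:k]
    return eigvecs[:, order]
```

On the faces data this would be applied in the lower-dimensional space found by kernel PCA (as the next slide notes), where d is small enough for the scatter matrices to be well behaved.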
70
Class labels for LDA For the unsupervised names and faces dataset, you don’t have true labels Use proxy for labeled training data Images from the dataset with only one detected face and one detected name Observation: Using LDA on top of the space found by kernel PCA improves performance significantly
71
Clustering faces Now that we have a representation for faces, the goal is to 'clean up' this dataset using a modified k-means clustering. [Figure: face points grouped under the names Obama, Bush, Clinton, Saddam.]
73
Clustering faces Now that we have a representation for faces, the goal is to 'clean up' this dataset using a modified k-means clustering. [Figure: the same face points for Obama, Bush, Clinton, Saddam, now with × marks added.]
74
Clustering faces Now that we have a representation for faces, the goal is to 'clean up' this dataset using a modified k-means clustering. [Figure: a later stage of the clustering, showing × marks and the labels Bush and Saddam.]
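My reading of the "modified" k-means on these slides is that each face may only be assigned to clusters corresponding to names detected in its own caption, rather than to any cluster. The sketch below implements that reading; it is an assumption about the exact procedure, not a transcription of the paper's algorithm, and it assumes every face has at least one candidate name.

```python
import numpy as np

def constrained_kmeans(faces, candidate_names, all_names, n_iters=20, seed=0):
    """faces: (N, d) face vectors in the reduced (discriminant) space.
    candidate_names: list of sets; candidate_names[i] = names in face i's caption.
    all_names: list of distinct names (one cluster per name).
    Returns an array of assigned name indices, one per face.
    """
    rng = np.random.default_rng(seed)
    name_index = {name: j for j, name in enumerate(all_names)}
    centres = faces[rng.choice(len(faces), size=len(all_names), replace=False)].copy()
    assign = np.zeros(len(faces), dtype=int)

    for _ in range(n_iters):
        # Assignment step: nearest centre, restricted to names in the caption.
        for i, face in enumerate(faces):
            allowed = [name_index[n] for n in candidate_names[i]]
            dists = [np.linalg.norm(face - centres[j]) for j in allowed]
            assign[i] = allowed[int(np.argmin(dists))]
        # Update step: recompute each name's centre from its assigned faces.
        for j in range(len(all_names)):
            members = faces[assign == j]
            if len(members):
                centres[j] = members.mean(axis=0)
    return assign
```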
75
Pruning clusters Remove clusters with < 3 faces; this leaves 19,355 images. For every data point, compute a likelihood score and remove points with low likelihood. k – number of nearest neighbours being considered; k_i – number of those nearest neighbours that are in cluster i; n – total number of points in the dataset; n_i – total number of points in cluster i.
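The slide's actual expression is not reproduced above. One density-ratio style score that can be formed from exactly these quantities compares the fraction of a point's k nearest neighbours that lie in its cluster to that cluster's overall share of the data; this is an assumption about the form, not a quote of the paper.

```latex
\mathrm{score}(x, i) \;=\; \frac{k_i / k}{\,n_i / n\,}
```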
76
Pruning clusters For various thresholds:
77
Merging clusters Merge clusters with different names that correspond to a single person, e.g. "Defense Donald Rumsfeld" and "Donald Rumsfeld", or "Colin Powell" and "Secretary of State". Look at the distance between the cluster means in discriminant space; if it is below a threshold, merge.
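A minimal sketch of that merge test: compute the mean of each cluster in discriminant space and report pairs whose means are closer than a threshold. The threshold value is an assumption.

```python
import numpy as np

def merge_candidates(cluster_means, names, threshold=1.0):
    """Return pairs of cluster names whose means are within `threshold` of each other."""
    pairs = []
    for a in range(len(names)):
        for b in range(a + 1, len(names)):
            if np.linalg.norm(cluster_means[a] - cluster_means[b]) < threshold:
                pairs.append((names[a], names[b]))
    return pairs
```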
78
Merging clusters Image credit: David Forsyth
79
Results