Who's in the Picture
Presented by Wanxue Dong
Research Question: Who is depicted in the associated image?
Find a correspondence between the faces in a news photograph and the names in its caption. Why is this important? Face-name matching has been used to browse museum collections and to organize large image collections. Here, the goal is to take a large collection of news images and captions as semi-supervised input and produce a fully supervised dataset of faces labeled with names.
Linking a face model and a language model with EM (Expectation Maximization)
Consider name assignment as a hidden-variable problem, where the hidden variables are the correct name-face correspondences for each picture. The EM (Expectation Maximization) procedure iterates between computing the expected values of the set of face-name correspondences and updating the face clusters and language model given those correspondences.
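To make the hidden-variable space concrete, the sketch below enumerates the candidate name-face correspondences for one picture: every partial one-to-one matching of caption names to detected faces. The names, faces, and helper function are illustrative, not the paper's code.

```python
from itertools import combinations, permutations

def candidate_assignments(names, faces):
    """Enumerate all partial one-to-one assignments of names to faces.

    Each assignment is a dict {name: face}; names left out of the dict
    are treated as "not pictured" and faces left out as "unassigned".
    """
    assignments = []
    for k in range(min(len(names), len(faces)) + 1):
        for name_subset in combinations(names, k):
            for face_tuple in permutations(faces, k):
                assignments.append(dict(zip(name_subset, face_tuple)))
    return assignments

# Two names, two faces: 1 empty + 4 single + 2 full = 7 assignments.
A = candidate_assignments(["Bush", "Blair"], ["f1", "f2"])
```

The hidden variable for a picture is which of these candidate assignments is the correct one; EM maintains a probability over them rather than committing to a single choice.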
Generative Model
Name Assignment
The likelihood of picture xi under assignment ai of names to faces, under the generative model, is a product over four kinds of terms, indexed as follows:
α indexes the names that are pictured
σ(α) indexes the faces assigned to the pictured names
β indexes the names that are not pictured
γ indexes the faces without assigned names
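The structure of that likelihood can be sketched in log space: each pictured name contributes a face-appearance term and a "pictured" context term, and each name left out contributes a "not pictured" context term. The probability functions and context features below are hypothetical stand-ins, and the term for unassigned faces (the γ background model) is omitted for brevity.

```python
import math

def log_likelihood(assignment, names,
                   p_face_given_name, p_pictured_given_context, context):
    """Sketch of the assignment log-likelihood.

    assignment: dict mapping pictured names (alpha) to faces (sigma(alpha)).
    names: all names detected in the caption.
    The two probability callables are illustrative placeholders; a real
    model would also score unassigned faces against a background model.
    """
    ll = 0.0
    for name, face in assignment.items():          # alpha, sigma(alpha)
        ll += math.log(p_face_given_name(face, name))
        ll += math.log(p_pictured_given_context(context[name]))
    for name in names:
        if name not in assignment:                 # beta: not pictured
            ll += math.log(1.0 - p_pictured_given_context(context[name]))
    return ll
```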
Name Assignment
EM procedure:
E-step: update each Pij according to the normalized probability of picture i under assignment j.
M-step: maximize the parameters of P(face | name) and P(pictured | context).
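One EM iteration over these two steps might be sketched as follows; the callables for enumerating assignments, scoring them, and refitting the models are hypothetical stand-ins for the paper's components.

```python
import math

def em_step(pictures, assignments_of, log_lik, update_models):
    """One EM iteration (sketch).

    E-step: for each picture i, compute soft weights P_ij over its
    candidate assignments j via a numerically stable softmax of their
    log-likelihoods.  M-step: refit the face and language models from
    the weighted assignments (delegated to update_models).
    """
    soft = {}
    for i, pic in enumerate(pictures):
        lls = [log_lik(pic, a) for a in assignments_of(pic)]
        m = max(lls)
        ws = [math.exp(l - m) for l in lls]    # subtract max for stability
        z = sum(ws)
        soft[i] = [w / z for w in ws]          # P_ij, sums to 1 over j
    update_models(soft)                        # M-step
    return soft
```

Iterating this step alternates between soft correspondences and model updates until the assignments stabilize.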
Modeling the Appearance of Faces: P(face | name)
P(face | name) is modeled with Gaussians of fixed covariance. This requires a representation of faces in a feature space:
Rectify all faces to a canonical pose
Train five support vector machines as feature detectors
Use kPCA to reduce the dimensionality of the data, then compute linear discriminants
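Once faces are mapped into the feature space, the fixed-covariance Gaussian appearance model is simple to state. The sketch below uses an isotropic Gaussian with a shared variance; the feature vectors and cluster means are illustrative, and the kPCA/LDA projection itself is assumed to have already been applied.

```python
import math

def gaussian_logpdf(x, mean, sigma2):
    """Log density of an isotropic Gaussian with fixed variance sigma2
    per dimension -- a sketch of the fixed-covariance appearance model."""
    d = len(x)
    sq = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return -0.5 * (d * math.log(2 * math.pi * sigma2) + sq / sigma2)

def p_face_given_name(face_vec, name_means, name, sigma2=1.0):
    # Each name's cluster is a Gaussian centred at its mean feature vector.
    return math.exp(gaussian_logpdf(face_vec, name_means[name], sigma2))
```

Because the covariance is fixed and shared, comparing P(face | name) across names reduces to comparing distances to the cluster means.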
Language Model: P(pictured | context)
Assigns a probability to each name based on its context within the caption.
The distributions are learned from counts of how often each context appears with an assigned (pictured) name versus an unassigned name.
Each context cue is modeled independently, with one distribution per cue.
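A count-based estimator of this kind can be sketched directly; the cue strings, the add-alpha smoothing, and the (cue, pictured) example format are illustrative choices, not the paper's exact estimator.

```python
from collections import Counter

def fit_context_model(examples, alpha=1.0):
    """Estimate P(pictured | cue) from counts, one cue at a time.

    examples: list of (cue, pictured_bool) pairs gathered from the
    current assignments.  Add-alpha smoothing keeps unseen outcomes
    from getting probability zero.
    """
    pos, tot = Counter(), Counter()
    for cue, pictured in examples:
        tot[cue] += 1
        if pictured:
            pos[cue] += 1
    return {cue: (pos[cue] + alpha) / (tot[cue] + 2 * alpha) for cue in tot}

# Hypothetical cues: "(L)" after a name, the verb "said" nearby.
probs = fit_context_model([("(L)", True), ("(L)", True), ("said", False)])
```

Because each cue is modeled independently, refitting the language model inside EM is just a recount over the soft assignments.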
Comparison of the EM and MM Procedures
The maximal assignment (MM) procedure:
M1 – set the maximal Pij to 1 and all others to 0
M2 – maximize the parameters of P(face | name) and P(pictured | context)
For both methods, incorporating a language model greatly improves the resulting clustering.
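The only difference from EM is the M1 hardening step, which can be sketched as follows (the picture-index-to-weights dict format matches the earlier soft-assignment sketch and is illustrative).

```python
def maximal_assignment(p_ij):
    """M1 of the MM procedure: for each picture i, put all weight on
    its single best candidate assignment (argmax over j), zeroing the
    rest, before refitting the models in M2."""
    hard = {}
    for i, ws in p_ij.items():
        best = max(range(len(ws)), key=ws.__getitem__)
        hard[i] = [1.0 if j == best else 0.0 for j in range(len(ws))]
    return hard

# e.g. maximal_assignment({0: [0.2, 0.7, 0.1]}) -> {0: [0.0, 1.0, 0.0]}
```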
Data Sets
Approximately half a million news pictures and captions from Yahoo News, collected over roughly two years.
Faces: 44,773 large, well-detected face images.
Names: an open-source named entity recognizer detects proper names in the captions.
Scale: rejecting face images that cannot be rectified satisfactorily leaves 34,623; restricting to images whose captions contain detected proper names leaves 30,281.
Experimental Results
The learned language model was applied to a test set of captions (text alone).
Test set: each detected name was hand-labeled IN or OUT according to whether the named person appeared in the corresponding picture.
The test measures how well the language model predicts these labels.
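This evaluation amounts to thresholding P(pictured | context) and comparing against the hand labels; the sketch below uses an illustrative 0.5 threshold and accuracy as the score, which are assumptions rather than the paper's exact protocol.

```python
def evaluate_in_out(pred_probs, gold_labels, threshold=0.5):
    """Predict IN when P(pictured | context) exceeds the threshold and
    report accuracy against the hand-assigned IN/OUT labels (sketch)."""
    correct = sum((p > threshold) == (g == "IN")
                  for p, g in zip(pred_probs, gold_labels))
    return correct / len(gold_labels)

acc = evaluate_in_out([0.9, 0.2, 0.6], ["IN", "OUT", "OUT"])
```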
Conclusion
This study couples language and images, using language to learn about images and images to learn about language.
Analyzing the language more carefully produces a much better clustering.
The result includes a natural-language classifier that can determine who is pictured from text alone.
Critiques and Future Work
The test set is limited because it is entirely hand-labeled.
No comparison against other models is reported.
Next step: learn a language model for free text on web pages to improve Google image search results.
Reference
Tamara L. Berg, Alexander C. Berg, Jaety Edwards, and David A. Forsyth. "Who's in the Picture?" Advances in Neural Information Processing Systems (NIPS), 2004.
Thank you!