1
Who Do I Look Like? Determining Parent-Offspring Resemblance via Gated Autoencoders Afshin Dehghan, Enrique G. Ortiz, Ruben Villegas, Mubarak Shah
2
Outline Introduction Autoencoder Gated Autoencoder Experiments Conclusions and Future Work
3
Introduction “Who does he look like more, the father or the mother?” In this paper, we aim to bridge the gap between findings in the social sciences and computer vision to answer the age-old question, “Who do I look like?”
4
Introduction
5
Anthropological studies have corroborated that offspring do in fact resemble their parents more than random strangers, and that at different ages they may resemble one parent more than the other. This paper introduces a new generative and discriminative model based on the gated autoencoder. Its contributions: 1. Discovers the optimal features for relating faces. 2. Learns metrics relating a parent and offspring via gated autoencoders. 3. Adds a discriminative stage that enhances the relationship of a parent-offspring pair, converging on a more discriminative function.
6
Introduction We aim to answer two key questions from the perspective of a computer: 1. Do offspring resemble their parents? 2. Do offspring resemble one parent more than the other? Given the answers to these questions, we can discover which facial features lead to the best performance in parent-offspring recognition. We believe familial resemblance can aid in reuniting parents with their missing children.
7
Autoencoder This deep learning architecture keeps only the most important information. This property allows us to learn the most discriminative features, which we refer to as 'genetic features'. N_y denotes the dimension of the image patch and N_m the number of hidden units.
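To make the slide concrete, here is a minimal tied-weight autoencoder sketch in numpy using the dimensions named above (N_y for the patch, N_m for the hidden units). The sigmoid nonlinearity, tied decoder weights, and squared reconstruction loss are assumptions for illustration, not necessarily the authors' exact formulation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Minimal tied-weight autoencoder sketch (assumed architecture).
class Autoencoder:
    def __init__(self, n_y, n_m, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.01, size=(n_m, n_y))  # encoder weights
        self.b_h = np.zeros(n_m)                          # hidden bias
        self.b_y = np.zeros(n_y)                          # output bias

    def encode(self, y):
        return sigmoid(self.W @ y + self.b_h)             # hidden 'genetic' features

    def decode(self, h):
        return self.W.T @ h + self.b_y                    # tied-weight reconstruction

    def reconstruction_error(self, y):
        y_hat = self.decode(self.encode(y))
        return 0.5 * np.sum((y - y_hat) ** 2)             # squared loss
```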
8
Gated Autoencoder(1) Why use a gated autoencoder? We are interested in encoding the relationship between a pair of images. The final output of our system is a relatedness or resemblance statistic, which we can use for classification.
9
Gated Autoencoder(2) - Generative Training The units z_k serve as mapping units that encode the relationship between the image pair.
10
Gated Autoencoder(3) - Generative Training N_x, N_y and N_z are the dimensions of x, y and z. F is the number of hidden units. Generative training minimizes the reconstruction loss function.
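The following is a sketch of a standard factored gated autoencoder with the dimensions named on this slide (parent patch x of size N_x, offspring patch y of size N_y, F factors, N_z mapping units). The multiplicative factor wiring and the symmetric squared reconstruction loss follow the common factored formulation; the details are assumptions, not a verbatim transcription of the paper's equations.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Factored gated autoencoder sketch (assumed details).
class GatedAutoencoder:
    def __init__(self, n_x, n_y, n_z, n_factors, seed=0):
        rng = np.random.default_rng(seed)
        self.Wx = rng.normal(0.0, 0.01, size=(n_x, n_factors))  # parent filters
        self.Wy = rng.normal(0.0, 0.01, size=(n_y, n_factors))  # offspring filters
        self.Wz = rng.normal(0.0, 0.01, size=(n_factors, n_z))  # mapping weights

    def mappings(self, x, y):
        # Mapping units z encode the relationship between the image pair.
        fx, fy = self.Wx.T @ x, self.Wy.T @ y
        return sigmoid(self.Wz.T @ (fx * fy))

    def reconstruct_y(self, x, z):
        # Reconstruct the offspring patch given the parent patch and the mapping.
        return self.Wy @ ((self.Wx.T @ x) * (self.Wz @ z))

    def reconstruct_x(self, y, z):
        return self.Wx @ ((self.Wy.T @ y) * (self.Wz @ z))

    def generative_loss(self, x, y):
        # Symmetric squared reconstruction error minimized in generative training.
        z = self.mappings(x, y)
        return (np.sum((y - self.reconstruct_y(x, z)) ** 2) +
                np.sum((x - self.reconstruct_x(y, z)) ** 2))
```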
11
Gated Autoencoder(4) - Discriminative Training Ground-truth labels: 1. 0 - not the same family 2. 1 - the same family The discriminative objective function is combined with the generative one into the final hybrid model; the best value of the weighting parameter is 0.4.
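Building on the GatedAutoencoder sketch above, here is one plausible form of the hybrid objective: a generative reconstruction term plus a discriminative term on the same-family label, blended with the 0.4 weight from the slide. The logistic classifier on the mapping units and the exact way the weight enters the sum are assumptions for illustration only.

```python
import numpy as np

def logistic(a):
    return 1.0 / (1.0 + np.exp(-a))

def hybrid_loss(model, x, y, label, w_cls, b_cls, weight=0.4):
    """label is 1 for a parent-offspring pair, 0 otherwise; model is the
    GatedAutoencoder sketch defined earlier. w_cls/b_cls are the weights
    and bias of an assumed logistic classifier on the mapping units."""
    z = model.mappings(x, y)                      # relationship encoding
    gen = model.generative_loss(x, y)             # generative reconstruction term
    p = logistic(w_cls @ z + b_cls)               # predicted relatedness
    disc = -(label * np.log(p + 1e-9) +
             (1 - label) * np.log(1 - p + 1e-9))  # discriminative cross-entropy term
    return (1 - weight) * gen + weight * disc     # hybrid combination (assumed form)
```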
12
Experiments(1) Explore our two main questions: 1. Do offspring resemble their parents? 2. Do offspring resemble one parent more than the other? For all experiments involving the gated autoencoder method: extract 8x8 patches from RGB images of size 64x64; set the number of filters to F = 160 and the number of mapping units to N_z = 40; the weighting parameter, found through cross-validation, is 0.4; use SVMs with an RBF kernel for classification, with parameters selected via 4-fold cross-validation (a sketch of this stage follows below).
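As a sketch of the classification stage just described, the snippet below trains an RBF-kernel SVM with 4-fold cross-validation using scikit-learn. The feature extraction (e.g. mapping-unit activations pooled over the 8x8 patches of the 64x64 face images) is abstracted into a pair_features array, and the parameter grid is a hypothetical choice, not the authors' reported settings.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_verifier(pair_features, labels):
    """pair_features: (n_pairs, n_dims) array of per-pair features.
    labels: 1 = same family, 0 = not the same family."""
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
    search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=4)  # 4-fold CV
    search.fit(pair_features, labels)
    return search.best_estimator_
```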
13
Do offspring resemble their parents?
14
Experiments(2) - Dataset Family101 dataset [11]: 1. 206 nuclear families 2. 101 unique family trees 3. 14,816 images We select 101 unique, nuclear families for our experiments. We split the set into 50 training families and 51 testing families, for a total of 11,300 images. [11] R. Fang, A. C. Gallagher, T. Chen, and A. Loui. Kinship Classification by Modeling Facial Feature Heredity. IEEE ICIP, 2013.
15
Experiments(3) Apply the discriminative feature learning technique to all possible parent-offspring relationships in the training set: 1. mother-daughter (MD) 2. mother-son (MS) 3. father-daughter (FD) 4. father-son (FS)
16
Experiments(4)
17
Do offspring resemble one parent more than the other?
18
Experiments(5) Daughters resemble their mothers more. Sons resemble their fathers more. These conclusions align with anthropological studies.
19
Experiments(6) - Genetic Features We examine our method with respect to three factors: 1. How our discovered features compare to those from anthropological studies. 2. Whether our genetic features outperform the state of the art in metric learning. 3. How well the feature models generalize.
20
Experiments(7) - Computer vs. Anthropology
21
Experiments(8) - Face Verification We evaluate how well our method performs against existing metric learning techniques, and determine whether fusing the findings from anthropological studies with our method improves performance. The KinFaceW [19] dataset is comprised of two sets: 1. KinFaceW-I, with 533 parent-offspring pairs taken from different images. 2. KinFaceW-II, with 1,000 parent-offspring pairs taken from the same image. [19] J. Lu, X. Zhou, Y.-P. Tan, Y. Shang, and J. Zhou. Neighborhood Repulsed Metric Learning for Kinship Verification. IEEE TPAMI, 2013.
22
Experiments(9) - KinFaceW-I Metric learners: 1. Information-Theoretic Metric Learning (ITML) 2. Neighborhood Repulsed Metric Learning (NRML)
23
Experiments(10) - KinFaceW-II
24
Experiments(11) - 5-fold cross-validation
25
Conclusions and Future Work Using this method, we uncover three key insights that bridge the gap between anthropological studies and computer vision. 1. Offspring resemble their parents with a probability higher than chance. 2. Female offspring resemble their mothers more often than their fathers, while male offspring only slightly favor their fathers. 3. The algorithm discovers features similar to those found in anthropological studies.
26
References Who's Your Daddy? KinFaceW