Hsiao-Yu Chiang, Xiaoya Li, Ganyu “Bruce” Xu, Pinshuo Ye, Zhanyuan Zhang


Progress Report
Hsiao-Yu Chiang, Xiaoya Li, Ganyu “Bruce” Xu, Pinshuo Ye, Zhanyuan Zhang

Autoencoder
An autoencoder is an unsupervised neural network that learns to compress its input into a low-dimensional representation and to reconstruct the input from that representation. Deepfake is an application of autoencoders: it trains two autoencoders and connects the output of one encoder to the input of the other autoencoder's decoder, so a face encoded from one identity is decoded as the other. Deepfake lacks a measurement of face-swapping quality; introducing a discriminative network turns the setup into a Generative Adversarial Network.
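To make this concrete, the sketch below shows a minimal deepfake-style setup in PyTorch: one shared encoder, one decoder per identity, trained on reconstruction only, with the face swap done at inference time. The layer sizes, the flattened 64x64 RGB input, and the helper names (train_step, swap_a_to_b) are illustrative assumptions, not the actual deepfakes/faceswap implementation.

```python
# Minimal deepfake-style autoencoder sketch (PyTorch); shapes and sizes are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a flattened face image into a low-dimensional code."""
    def __init__(self, in_dim=64 * 64 * 3, code_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(),
                                 nn.Linear(512, code_dim))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a flattened face image from the code."""
    def __init__(self, code_dim=128, out_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(code_dim, 512), nn.ReLU(),
                                 nn.Linear(512, out_dim), nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

# One shared encoder, one decoder per identity (A and B).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
recon_loss = nn.MSELoss()

def train_step(faces_a, faces_b, optimizer):
    # Reconstruction only: each decoder learns to rebuild its own identity
    # from the shared encoder's code. Inputs are flattened image batches.
    loss = (recon_loss(decoder_a(encoder(faces_a)), faces_a) +
            recon_loss(decoder_b(encoder(faces_b)), faces_b))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def swap_a_to_b(face_a):
    # Face swap at inference: encode a face of A, decode it with B's decoder.
    with torch.no_grad():
        return decoder_b(encoder(face_a))
```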

Beyond face swapping
Deepfake lacks a quantitative measurement of face-swap quality. We train a second, discriminative network to tell whether an image is "fake". Combining the deepfake autoencoders with this discriminative network forms a Generative Adversarial Network (GAN).
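The sketch below adds such a discriminative network on top of the swapped faces, giving the standard GAN objective: the discriminator is trained to separate real from generated faces, and the autoencoder (acting as the generator) is trained to fool it. It assumes the autoencoder from the previous sketch produces fake_faces; the PyTorch layer sizes and the gan_step helper are hypothetical.

```python
# Hedged GAN sketch (PyTorch): discriminator on real vs. swapped faces.
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Outputs a logit for the probability that a flattened face image is real."""
    def __init__(self, in_dim=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, x):
        return self.net(x)  # raw logits

disc = Discriminator()
bce = nn.BCEWithLogitsLoss()

def gan_step(real_faces, fake_faces, d_opt, g_opt):
    # d_opt optimizes the discriminator; g_opt optimizes the autoencoder
    # (encoder + decoders) that produced fake_faces.
    batch = real_faces.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator step: real faces -> 1, generated faces -> 0.
    d_loss = bce(disc(real_faces), ones) + bce(disc(fake_faces.detach()), zeros)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator step: make the discriminator call generated faces real.
    g_loss = bce(disc(fake_faces), ones)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```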

References
https://en.wikipedia.org/wiki/Autoencoder
https://en.wikipedia.org/wiki/Generative_adversarial_network
https://github.com/deepfakes/faceswap