1
CS 2770: Computer Vision Generative Adversarial Networks
Prof. Adriana Kovashka University of Pittsburgh April 16, 2019
2
Plan for this lecture:
- Generative models: what are they?
- Technique: Generative Adversarial Networks
- Applications
- Conditional GANs and the cycle-consistency loss
- Dealing with sparse data; progressive training
3
Supervised vs Unsupervised Learning
Data: x. Just data, no labels! Goal: learn some underlying hidden structure of the data. Examples: clustering, dimensionality reduction, feature learning, density estimation, etc. (Figure: k-means clustering.) Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
4
Supervised vs Unsupervised Learning
Data: x. Just data, no labels! Goal: learn some underlying hidden structure of the data. Examples: clustering, dimensionality reduction, feature learning, density estimation, etc. (Figure: autoencoders, used for feature learning.) Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
5
Generative Models
Training data ~ p_data(x); generated samples ~ p_model(x). We want to learn p_model(x) similar to p_data(x). Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
6
Generative Models
Training data ~ p_data(x); generated samples ~ p_model(x). We want to learn p_model(x) similar to p_data(x). This addresses density estimation, a core problem in unsupervised learning. It comes in several flavors: explicit density estimation (explicitly define and solve for p_model(x)) and implicit density estimation (learn a model that can sample from p_model(x) without explicitly defining it). Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
7
Why Generative Models?
Realistic samples for artwork, super-resolution, colorization, etc. Generative models can enhance training datasets with diverse synthetic data. Generative models of time-series data can be used for simulation. Adapted from Serena Yeung, CS231n Lecture 13 (2017)
8
Taxonomy of Generative Models
Generative models divide into explicit density models and implicit density models. Explicit density splits into tractable density (Fully Visible Belief Nets: NADE, MADE, PixelRNN/CNN; change-of-variables models, i.e. nonlinear ICA) and approximate density (variational: Variational Autoencoder; Markov chain: Boltzmann Machine). Implicit density splits into direct (GAN) and Markov chain (GSN). Figure copyright and adapted from Ian Goodfellow, Tutorial on Generative Adversarial Networks, 2017. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
9
Generative Adversarial Networks
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Problem: we want to sample from a complex, high-dimensional training distribution, and there is no direct way to do this. Solution: sample from a simple distribution, e.g. random noise, and learn a transformation to the training distribution. Q: What can we use to represent this complex transformation? Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
10
Generative Adversarial Networks
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Problem: we want to sample from a complex, high-dimensional training distribution, and there is no direct way to do this. Solution: sample from a simple distribution, e.g. random noise, and learn a transformation to the training distribution. Q: What can we use to represent this complex transformation? A: A neural network! The generator network takes random noise z as input and outputs a sample from the training distribution. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
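As a concrete sketch, here is a minimal PyTorch generator (a hypothetical fully connected network for flattened 28×28 images; the layer sizes are illustrative, not from the lecture):

    import torch
    import torch.nn as nn

    class Generator(nn.Module):
        def __init__(self, z_dim=100, img_dim=28 * 28):
            super().__init__()
            # Learned transformation from noise space to data space
            self.net = nn.Sequential(
                nn.Linear(z_dim, 256),
                nn.ReLU(),
                nn.Linear(256, img_dim),
                nn.Tanh(),  # pixel values in [-1, 1]
            )

        def forward(self, z):
            return self.net(z)

    z = torch.randn(64, 100)        # sample from a simple distribution
    fake_images = Generator()(z)    # transformed toward the data distribution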
11
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Generator network: tries to fool the discriminator by generating real-looking images. Discriminator network: tries to distinguish between real and fake images. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
12
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Generator network: tries to fool the discriminator by generating real-looking images. Discriminator network: tries to distinguish between real and fake images. (Diagram: random noise z → generator network → fake images; fake images from the generator and real images from the training set both feed the discriminator network, which outputs real or fake.) Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
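A matching discriminator sketch under the same illustrative assumptions; it maps an image to the probability that the image is real:

    import torch.nn as nn

    class Discriminator(nn.Module):
        def __init__(self, img_dim=28 * 28):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(img_dim, 256),
                nn.LeakyReLU(0.2),
                nn.Linear(256, 1),
                nn.Sigmoid(),  # probability that the input is real
            )

        def forward(self, x):
            return self.net(x)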
13
Adversarial Networks Framework
Both players are differentiable functions. When x is sampled from the data, D tries to output 1. When x is sampled from the model (input noise z passed through the differentiable function G), D tries to output 0. Slide credit: Ian Goodfellow
14
Adversarial Networks Framework
Generator: x ~ G(z). Discriminator: real vs. fake. [Goodfellow et al. 2014] Slide credit: Jun-Yan Zhu
15
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Generator network: tries to fool the discriminator by generating real-looking images. Discriminator network: tries to distinguish between real and fake images. The two are trained jointly in a minimax game with objective: min_G max_D [E_{x~p_data}[log D(x)] + E_{z~p(z)}[log(1 − D(G(z)))]]. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
16
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Train jointly in a minimax game; the discriminator outputs a likelihood in (0,1) that an image is real. Minimax objective: min_G max_D [E_{x~p_data}[log D(x)] + E_{z~p(z)}[log(1 − D(G(z)))]], where D(x) is the discriminator output for real data x and D(G(z)) is its output for generated fake data G(z). Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
17
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Train jointly in a minimax game; the discriminator outputs a likelihood in (0,1) that an image is real. The discriminator (θ_d) wants to maximize the objective so that D(x) is close to 1 (real) and D(G(z)) is close to 0 (fake). The generator (θ_g) wants to minimize the objective so that D(G(z)) is close to 1 (the discriminator is fooled into thinking the generated G(z) is real). Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
18
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Minimax objective: min_G max_D [E_{x~p_data}[log D(x)] + E_{z~p(z)}[log(1 − D(G(z)))]]. Alternate between: 1. gradient ascent on the discriminator, max_{θ_d} [E_x[log D(x)] + E_z[log(1 − D(G(z)))]]; 2. gradient descent on the generator, min_{θ_g} E_z[log(1 − D(G(z)))]. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
19
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Alternate between: 1. gradient ascent on the discriminator; 2. gradient descent on the generator, min_{θ_g} E_z[log(1 − D(G(z)))]. In practice, optimizing this generator objective does not work well! The gradient signal is dominated by the region where the sample is already good; when a sample is likely fake we want to learn from it to improve the generator, but the gradient of log(1 − D(G(z))) is relatively flat in exactly that region. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
20
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Alternate between: 1. gradient ascent on the discriminator; 2. instead, gradient ascent on the generator with a different objective: max_{θ_g} E_z[log D(G(z))]. Instead of minimizing the likelihood of the discriminator being correct, we now maximize the likelihood of the discriminator being wrong. Same goal of fooling the discriminator, but now the gradient signal is high for bad samples and low for samples that already fool D, so it works much better. Standard in practice. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
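In code the two generator objectives differ by one line; a hedged PyTorch sketch, where d_fake stands for D(G(z)):

    import torch

    def generator_loss(d_fake, non_saturating=True):
        # d_fake = D(G(z)): discriminator's probability that fakes are real
        eps = 1e-8  # avoid log(0)
        if non_saturating:
            # Gradient ascent on log D(G(z)): strong gradient when the
            # discriminator confidently rejects the samples
            return -torch.log(d_fake + eps).mean()
        # Original minimax term log(1 - D(G(z))): nearly flat gradient
        # exactly where the generator most needs to learn
        return torch.log(1 - d_fake + eps).mean()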
21
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Putting it together: the full GAN training algorithm alternates k discriminator updates with one generator update per iteration (the paper uses k = 1). Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
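A minimal PyTorch sketch of this alternating scheme, assuming the illustrative Generator and Discriminator sketched earlier and some dataloader yielding batches of real flattened images (all hyperparameters illustrative):

    import torch
    import torch.nn as nn

    G, D = Generator(), Discriminator()   # illustrative modules from above
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    for real in dataloader:                    # real: (batch, 784) images
        batch = real.size(0)
        ones = torch.ones(batch, 1)
        zeros = torch.zeros(batch, 1)

        # 1. Gradient ascent on the discriminator objective
        #    (implemented as descent on the equivalent BCE loss)
        fake = G(torch.randn(batch, 100)).detach()  # block gradients to G
        d_loss = bce(D(real), ones) + bce(D(fake), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # 2. Non-saturating generator update: maximize log D(G(z))
        #    by labeling the fakes as "real" in the BCE loss
        g_loss = bce(D(G(torch.randn(batch, 100))), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()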
22
Training GANs: Two-player game
Ian Goodfellow et al., “Generative Adversarial Nets”, NIPS 2014. Generator network: tries to fool the discriminator by generating real-looking images. Discriminator network: tries to distinguish between real and fake images. (Diagram: random noise z → generator network → fake images; fake images and real images from the training set feed the discriminator network, which outputs real or fake.) After training, use the generator network alone, fed with random noise z, to generate new images. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
23
GAN training is challenging
Vanishing gradients: when the discriminator is very good, the generator receives little learning signal. Mode collapse: too little diversity in the generated samples. Lack of convergence: it is hard to reach a Nash equilibrium between the two players. The loss value does not always correspond to image quality; the Fréchet Inception Distance (FID) is a decent evaluation metric.
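FID fits a Gaussian to Inception-v3 features of each image set and compares them: FID = ‖μ_r − μ_g‖² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). A hedged NumPy/SciPy sketch, assuming the (N, 2048) pooled activations have already been extracted for both sets:

    import numpy as np
    from scipy.linalg import sqrtm

    def fid(act_real, act_gen):
        # act_*: (N, 2048) Inception-v3 pool features for each image set
        mu_r, mu_g = act_real.mean(axis=0), act_gen.mean(axis=0)
        sigma_r = np.cov(act_real, rowvar=False)
        sigma_g = np.cov(act_gen, rowvar=False)
        covmean = sqrtm(sigma_r @ sigma_g)
        if np.iscomplexobj(covmean):   # numerical noise can introduce tiny
            covmean = covmean.real     # imaginary parts; drop them
        diff = mu_r - mu_g
        return diff @ diff + np.trace(sigma_r + sigma_g - 2 * covmean)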
24
Alternative loss functions
25
WGAN vs GAN
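For reference (not from the slides): a WGAN critic outputs an unbounded score rather than a probability, and its loss approximates the Wasserstein (earth-mover) distance between the real and model distributions. A minimal sketch of the original weight-clipping formulation from Arjovsky et al., 2017:

    import torch

    def critic_loss(d_real, d_fake):
        # Critic maximizes E[D(x)] - E[D(G(z))]; we minimize the negation
        return -(d_real.mean() - d_fake.mean())

    def generator_loss_wgan(d_fake):
        # Generator maximizes E[D(G(z))]
        return -d_fake.mean()

    def clip_weights(critic, c=0.01):
        # Crude Lipschitz constraint from the original WGAN paper
        for p in critic.parameters():
            p.data.clamp_(-c, c)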
26
Tips and tricks: use batchnorm and ReLU; regularize the norm of the gradients; use one of the newer loss functions; add noise to inputs or labels; add an image-similarity term to avoid mode collapse; use labels when available (conditional GANs); etc. Two of these tricks are sketched below.
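A hedged sketch of label smoothing and instance noise (the 0.9 target and the noise scale are illustrative defaults, not values from the lecture):

    import torch

    def smooth_real_labels(batch_size, value=0.9):
        # One-sided label smoothing: call real images 0.9 instead of 1.0,
        # which keeps the discriminator from becoming overconfident
        return torch.full((batch_size, 1), value)

    def add_instance_noise(images, sigma=0.1):
        # Noise on D's inputs blurs the real/fake decision boundary and
        # keeps gradients alive early in training (anneal sigma over time)
        return images + sigma * torch.randn_like(images)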
27
Generative Adversarial Nets: Interpretable Vector Math
Radford et al., ICLR 2016. (Figure: samples from the model, grouped as “smiling woman”, “neutral woman”, “neutral man”.) Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
28
Generative Adversarial Nets: Interpretable Vector Math
Radford et al., ICLR 2016. Take samples from the model for “smiling woman”, “neutral woman”, and “neutral man”; average the z vectors within each group, then do arithmetic on the averages. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
29
Generative Adversarial Nets: Interpretable Vector Math
Radford et al., ICLR 2016. Averaging the z vectors and computing “smiling woman” − “neutral woman” + “neutral man” yields samples of a smiling man. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
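A sketch of this latent arithmetic, assuming a trained generator G and hypothetical lists smiling_woman_zs, neutral_woman_zs, neutral_man_zs of z vectors whose samples matched each concept:

    import torch

    # Average several z's per concept to suppress per-sample noise,
    # then do word2vec-style arithmetic in latent space
    z_smiling_woman = torch.stack(smiling_woman_zs).mean(dim=0)
    z_neutral_woman = torch.stack(neutral_woman_zs).mean(dim=0)
    z_neutral_man = torch.stack(neutral_man_zs).mean(dim=0)

    z_smiling_man = z_smiling_woman - z_neutral_woman + z_neutral_man
    image = G(z_smiling_man.unsqueeze(0))   # decode the new concept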
30
Generative Adversarial Nets: Interpretable Vector Math
Radford et al., ICLR 2016. (Figure: samples for “man with glasses”, “man without glasses”, “woman without glasses”.) Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
31
Generative Adversarial Nets: Interpretable Vector Math
Radford et al., ICLR 2016. “Man with glasses” − “man without glasses” + “woman without glasses” yields a woman with glasses. Slide credit: Fei-Fei Li, Justin Johnson & Serena Yeung, CS231n Lecture 13 (2017)
32
What is in this image? (Yeh et al., 2016) Slide credit: Ian Goodfellow
33
Generative modeling reveals a face
(Yeh et al., 2016: semantic image inpainting, where the generative model fills in the masked region with a plausible face.) Slide credit: Ian Goodfellow
34
Artificial Fashion: vue.ai
Slide credit: Ian Goodfellow
35
Celebrities Who Never Existed
Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018. Slide credit: Ian Goodfellow (2017)
36
Creative Adversarial Networks
(Elgammal et al., 2017) Slide credit: Ian Goodfellow
37
GANs for Privacy (Action Detection)
Ren et al., “Learning to Anonymize Faces for Privacy Preserving Action Detection”, ECCV 2018
38
Plan for this lecture:
- Generative models: what are they?
- Technique: Generative Adversarial Networks
- Applications
- Conditional GANs and the cycle-consistency loss
- Dealing with sparse data; progressive training
39
Conditional GANs
40
GANs
G: generate fake samples that can fool D. D: classify fake samples vs. real images. (Diagram: x → generator G → G(x) → discriminator D → real or fake?) [Goodfellow et al. 2014] Slide credit: Jun-Yan Zhu
41
Conditional GANs
(Diagram: x → generator G → G(x); the discriminator D now receives the pair (x, G(x)), or (x, y) for real examples, and decides: real or fake pair?)
Adapted from Jun-Yan Zhu
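A hedged sketch of such a pair discriminator (pix2pix-style, with illustrative layer sizes): it concatenates the input and output images along the channel axis and scores the pair jointly:

    import torch
    import torch.nn as nn

    class PairDiscriminator(nn.Module):
        def __init__(self, in_ch=3, out_ch=3):
            super().__init__()
            # Judge "is y a plausible translation of x?",
            # not just "is y a realistic image?"
            self.net = nn.Sequential(
                nn.Conv2d(in_ch + out_ch, 64, 4, stride=2, padding=1),
                nn.LeakyReLU(0.2),
                nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch of scores
            )

        def forward(self, x, y):
            return self.net(torch.cat([x, y], dim=1))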
42
Edges → Images. (Figure: three input/output example pairs.)
Edges from [Xie & Tu, 2015] Pix2pix / CycleGAN
43
Sketches → Images, trained on Edges → Images. (Figure: two input/output example pairs.)
Data from [Eitz, Hays, Alexa, 2012] Pix2pix / CycleGAN
44
https://affinelayer.com/pixsrv/
#edges2cats demo [Christopher Hesse]; example images by @gods_tail, @matthematician, Vitaly, and Ivy. Pix2pix / CycleGAN
45
Paired vs. unpaired training data. (Figure: paired examples {x_i, y_i} on the left; two unrelated image sets X and Y on the right.) Slide credit: Jun-Yan Zhu
46
Cycle Consistency. Discriminator D_Y with adversarial loss L_GAN(G(x), y): real zebras vs. generated zebras. Zhu et al., “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks”, ICCV 2017
47
Cycle Consistency. Discriminator D_Y with L_GAN(G(x), y): real zebras vs. generated zebras. Discriminator D_X with L_GAN(F(y), x): real horses vs. generated horses. Zhu et al., “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks”, ICCV 2017
48
Cycle Consistency. Forward cycle loss: ‖F(G(x)) − x‖₁. (Diagram: x → G(x) → F(G(x)).)
Zhu et al., “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks”, ICCV 2017
49
Cycle Consistency. Forward cycle loss: ‖F(G(x)) − x‖₁. (Figure: examples with small vs. large cycle loss.) The cycle loss helps cope with mode collapse. Zhu et al., “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks”, ICCV 2017
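The cycle losses in code, as a minimal sketch assuming mappings G: X→Y and F: Y→X implemented as modules:

    import torch

    def cycle_loss(G, F, x, y):
        # Forward cycle:  x -> G(x) -> F(G(x)) should return to x
        forward = (F(G(x)) - x).abs().mean()
        # Backward cycle: y -> F(y) -> G(F(y)) should return to y
        backward = (G(F(y)) - y).abs().mean()
        return forward + backward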
50
Training Details: Objective
Zhu et al., “Unpaired Image-To-Image Translation Using Cycle-Consistent Adversarial Networks”, ICCV 2017
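From the cited paper, the full objective combines both adversarial losses with the weighted cycle term:

L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ L_cyc(G, F),

solved as G*, F* = arg min_{G,F} max_{D_X, D_Y} L(G, F, D_X, D_Y); the paper sets λ = 10.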
51
(Figure: an input photo rendered in the styles of Monet, Van Gogh, Cezanne, and Ukiyo-e.) Pix2pix / CycleGAN
52
Pix2pix / CycleGAN
53
Pix2pix / CycleGAN
54
Pix2pix / CycleGAN
55
Pix2pix / CycleGAN
56
Pix2pix / CycleGAN
57
StarGAN Choi et al., “StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation”, CVPR 2018
58
Plan for this lecture:
- Generative models: what are they?
- Technique: Generative Adversarial Networks
- Applications
- Conditional GANs and the cycle-consistency loss
- Dealing with sparse data; progressive training
59
Generating with little data for ads
Faces are persuasive and carry meaning/sentiment. We learn to generate faces appropriate for each ad category. Because our data is so diverse yet limited in count, standard approaches that directly model pixel distributions do not work well. (Figure: example ad categories: beauty, clothing, safety, soda, cars, human rights, self-esteem, chocolate, domestic violence.) Thomas and Kovashka, BMVC 2018
60
Generating with little data for ads
Instead, we model the distribution over attributes for each category (e.g., domestic-violence ads contain “black eye”, beauty ads contain “red lips”), generate an image with the attributes of an ad class, and model the attributes with help from an external large dataset. (Architecture figure: a 128×128×3 input passes through a convolutional encoder, 64×64×8 → 32×32×16 → 16×16×32 → 8×8×64 → 4×4×128 → 2×2×256 → 512, to an embedding; sampling from μ and σ (each 100-D) gives a 100-D latent that captures non-semantic appearance properties (colors, etc.), while externally enforced semantics come from 40-D facial attributes (attractive, baggy eyes, big lips, bushy eyebrows, eyeglasses, gray hair, makeup, male, pale skin, rosy cheeks, etc.) and 10-D facial expressions (anger, contempt, disgust, fear, happy, neutral, sad, surprise, plus valence and arousal scores); the decoder maps the resulting 150-D code back to an output image.) Thomas and Kovashka, BMVC 2018
61
Generating with little data for ads
(Figure: given an original face, its reconstruction, and transforms toward each ad category (beauty, clothing, domestic violence, safety, soda, alcohol), comparing StarGAN (C), StarGAN (T), and our conditional and latent variants.) Thomas and Kovashka, BMVC 2018
62
Stagewise generation Singh et al., “FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery”, CVPR 2019
63
Stagewise generation Johnson et al., “Image Generation from Scene Graphs”, CVPR 2018
64
Progressive generation
(Diagram: latent → generator G → generated image → discriminator D → real or fake; both networks span 4×4 to 64×64 resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
65
Progressive generation
(Diagram: latent → G, grown 4×4 → 64×64 → generated image → D, mirrored 64×64 → 4×4 → real or fake.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
66
Progressive generation
(Diagram: the same setup grown to full 1024×1024 resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
67
Progressive generation
(Diagram: latent → G → generated image → D → real or fake, 4×4 to 1024×1024.) “There’s waves everywhere! But where’s the shore?” Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
68
Progressive generation
(Diagram: as before, now also highlighting the 64×64 scale, where the global structure is visible.) “There it is!” Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
69
Progressive generation
(Diagram: training starts with both G and D at 4×4 resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
70
Progressive generation
(Diagram: training continues at 4×4 resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
71
Progressive generation
(Diagram: a new 8×8 stage is added to both G and D.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
72
Progressive generation
(Diagram: stages are added progressively until G and D operate at 1024×1024.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
73
Progressive generation
(Diagram of the generator’s replicated block: each stage applies 2× nearest-neighbor upsampling followed by 3×3 convolutions, growing 4×4 → 8×8 → 16×16 → 32×32.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
74
Progressive generation
(Diagram: a toRGB layer, a 1×1 convolution, projects the feature maps to RGB at the current output resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
75
Progressive generation
(Diagram: during growth, toRGB outputs exist at both the old and the new resolution.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
76
Progressive generation
(Diagram: when a new stage is added, its toRGB output is blended with the upsampled toRGB output of the previous stage by a linear crossfade.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
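A sketch of that crossfade for the generator, assuming callables for the old and new toRGB layers and the new convolutional block; alpha ramps from 0 to 1 while the stage is faded in:

    import torch.nn.functional as F

    def faded_output(x_low, new_block, to_rgb_old, to_rgb_new, alpha):
        # x_low: feature maps at the previous (lower) resolution
        up = F.interpolate(x_low, scale_factor=2, mode='nearest')
        old_rgb = F.interpolate(to_rgb_old(x_low), scale_factor=2,
                                mode='nearest')   # old path, upsampled RGB
        new_rgb = to_rgb_new(new_block(up))       # path through the new stage
        # Linear crossfade: alpha grows 0 -> 1 over the transition phase
        return alpha * new_rgb + (1 - alpha) * old_rgb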
77
Progressive generation
(Diagram: the discriminator mirrors the generator, with a fromRGB layer and 0.5× downsampling stages, 32×32 → 16×16 → 8×8 → 4×4.) Karras et al., “Progressive Growing of GANs for Improved Quality, Stability, and Variation”, ICLR 2018
78
What’s next algorithmically? And what are some social implications?
79
“Deepfakes”
80
You can be anyone you want…
Karras et al., “A Style-Based Generator Architecture for Generative Adversarial Networks”, CVPR 2019