Improving Generative Adversarial Network


1 Improving Generative Adversarial Network
Not covered here: BEGAN (energy-based); Gang of GANs: Generative Adversarial Networks with Maximum Margin Ranking; MAGAN: Margin Adaptation for Generative Adversarial Networks; MIX+GAN (from "Generalization and Equilibrium in Generative Adversarial Nets (GANs)").

2 Improving GAN
Outline: quick review of GAN; a unified framework of GAN (f-divergence); selecting a better divergence: W-GAN and Improved W-GAN; evaluation.

3 Review

4 Basic Idea of GAN
The data we want to generate has a distribution $P_{data}(x)$. (Figure: the image space, with high-probability regions where realistic images lie and low-probability regions elsewhere.)

5 Basic Idea of GAN
A generator G is a network. Sampling $z$ from a normal distribution and outputting $x = G(z)$ defines a probability distribution $P_G(x)$. We want $P_G(x)$ to be as close as possible to $P_{data}(x)$. The difficulty: it is hard to compute $P_G(x)$, and we do not know what the distribution looks like.

6 Basic Idea of GAN
Generator v1 maps samples from a normal distribution to $P_G(x)$; real images come from $P_{data}(x)$. Discriminator v1 is trained to output 1 for real images and 0 for generated images. It can be proved that the loss of this discriminator is related to the JS divergence between $P_{data}$ and $P_G$.

7 Basic Idea of GAN
Next step: update the parameters of the generator (v1 → v2) to minimize the JS divergence, i.e., so that the generator's output is classified as "real" by discriminator v1 (as close to 1 as possible, e.g., pushing the discriminator score from 0.13 towards 1.0). Generator + discriminator together form one network: use gradient descent to update the parameters of the generator while keeping the discriminator fixed.
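A minimal sketch of this alternating update in PyTorch (the network sizes, optimizers, and BCE loss are illustrative assumptions, not taken from the slides):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def gan_step(x_real):                      # x_real: (batch, 784) real images
    batch = x_real.size(0)
    z = torch.randn(batch, 100)            # sample from the prior (normal distribution)

    # Train discriminator: real -> 1, generated -> 0
    x_fake = G(z).detach()                 # fix the generator while updating D
    loss_D = bce(D(x_real), torch.ones(batch, 1)) + \
             bce(D(x_fake), torch.zeros(batch, 1))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Train generator: fool the fixed discriminator (output close to 1)
    z = torch.randn(batch, 100)
    loss_G = bce(D(G(z)), torch.ones(batch, 1))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()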

8 Unified Framework
The original GAN uses a binary classifier to evaluate the JS divergence between two distributions. Actually, we can evaluate any f-divergence. Roadmap: f-divergence → Fenchel conjugate → connection to GAN.
Sebastian Nowozin, Botond Cseke, Ryota Tomioka, "f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization", NIPS, 2016

9 f-divergence
$P$ and $Q$ are two distributions; $p(x)$ and $q(x)$ are the probabilities of sampling $x$.
$$D_f(P\|Q) = \int_x q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx, \quad \text{with } f \text{ convex and } f(1) = 0.$$
The constraints on $f$ make $D_f$ non-negative. If $p(x) = q(x)$ for all $x$, then $D_f(P\|Q) = \int_x q(x)\, f(1)\, dx = 0$. In general, because $f$ is convex,
$$D_f(P\|Q) = \int_x q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx \;\ge\; f\!\left(\int_x q(x)\frac{p(x)}{q(x)}\, dx\right) = f(1) = 0.$$
So when $P$ and $Q$ are the same distribution, $D_f(P\|Q)$ attains its smallest value, which is 0.

10 f-divergence: KL, Reverse KL, Chi Square
$$D_f(P\|Q) = \int_x q(x)\, f\!\left(\frac{p(x)}{q(x)}\right) dx, \quad f \text{ convex}, \; f(1) = 0.$$
KL: $f(x) = x\log x$ gives $D_f(P\|Q) = \int_x q(x)\frac{p(x)}{q(x)}\log\frac{p(x)}{q(x)}\, dx = \int_x p(x)\log\frac{p(x)}{q(x)}\, dx$.
Reverse KL: $f(x) = -\log x$ gives $D_f(P\|Q) = \int_x q(x)\left(-\log\frac{p(x)}{q(x)}\right) dx = \int_x q(x)\log\frac{q(x)}{p(x)}\, dx$.
Chi Square: $f(x) = (x-1)^2$ gives $D_f(P\|Q) = \int_x q(x)\left(\frac{p(x)}{q(x)}-1\right)^2 dx = \int_x \frac{(p(x)-q(x))^2}{q(x)}\, dx$.
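A quick numerical illustration of these three divergences for small discrete distributions (the example distributions p and q are made up):

import numpy as np

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

def f_divergence(p, q, f):
    r = p / q
    return np.sum(q * f(r))        # D_f(P||Q) = sum_x q(x) f(p(x)/q(x))

kl         = f_divergence(p, q, lambda x: x * np.log(x))   # KL(P||Q)
reverse_kl = f_divergence(p, q, lambda x: -np.log(x))      # KL(Q||P)
chi_square = f_divergence(p, q, lambda x: (x - 1) ** 2)
print(kl, reverse_kl, chi_square)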

11 Fenchel Conjugate
Every convex function $f$ has a conjugate function $f^*$:
$$f^*(t) = \sup_{x \in \mathrm{dom}(f)} \{xt - f(x)\}$$
(the supremum, i.e., the least upper bound). Picture: for each fixed $x_i$, $x_i t - f(x_i)$ is a straight line in $t$; $f^*(t)$ is the upper envelope of all these lines, so $f^*(t_1)$ and $f^*(t_2)$ are read off the highest line at $t_1$ and $t_2$.

12 Fenchel Conjugate
Every convex function $f$ has a conjugate $f^*$, and $(f^*)^* = f$, where $f^*(t) = \sup_{x \in \mathrm{dom}(f)}\{xt - f(x)\}$.
Example: $f(x) = x\log x$. Plot a few of the lines $xt - f(x)$, e.g., $0.1t - 0.1\log 0.1$, $1t - 0$, $10t - 10\log 10$. Their upper envelope looks like an exponential; indeed $f^*(t) = \exp(t-1)$.

13 Fenchel Conjugate
Every convex function $f$ has a conjugate $f^*$, and $(f^*)^* = f$. For $f(x) = x\log x$:
$$f^*(t) = \sup_{x \in \mathrm{dom}(f)}\{xt - x\log x\}$$
Let $g(x) = xt - x\log x$. Given $t$, find the $x$ maximizing $g(x)$: setting $g'(x) = t - \log x - 1 = 0$ gives $x = \exp(t-1)$. Therefore
$$f^*(t) = \exp(t-1)\cdot t - \exp(t-1)\cdot(t-1) = \exp(t-1).$$
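A small numerical sanity check of this conjugate (the grid range and the test values of t are arbitrary choices):

import numpy as np

x = np.linspace(1e-6, 50, 200000)          # dom(f) for f(x) = x log x
f = x * np.log(x)

for t in [-1.0, 0.0, 1.0, 2.0]:
    numeric = np.max(x * t - f)            # sup_x { xt - f(x) } approximated on the grid
    closed_form = np.exp(t - 1)            # f*(t) = exp(t - 1)
    print(t, numeric, closed_form)         # the two columns should nearly match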

14 Connection with GAN
Recall $f^*(t) = \sup_{x\in \mathrm{dom}(f)}\{xt - f(x)\}$ and $(f^*)^* = f$, so $f\!\left(\frac{p(x)}{q(x)}\right) = \sup_{t \in \mathrm{dom}(f^*)}\left\{\frac{p(x)}{q(x)}\, t - f^*(t)\right\}$. Therefore
$$D_f(P\|Q) = \int_x q(x) \left[\sup_{t \in \mathrm{dom}(f^*)}\left\{\frac{p(x)}{q(x)}\, t - f^*(t)\right\}\right] dx \;\ge\; \int_x q(x)\left[\frac{p(x)}{q(x)}\, D(x) - f^*\!\big(D(x)\big)\right] dx$$
for any function $D$ whose input is $x$ and whose output is a value $t$ (the pointwise supremum, the least upper bound, is replaced by the value at $t = D(x)$). Taking the best such $D$,
$$D_f(P\|Q) \approx \max_D \int_x q(x)\left[\frac{p(x)}{q(x)}\, D(x) - f^*\!\big(D(x)\big)\right] dx = \max_D \left[\int_x p(x)\, D(x)\, dx - \int_x q(x)\, f^*\!\big(D(x)\big)\, dx\right]$$

15 Connection with GAN
$$D_f(P\|Q) \approx \max_D \left[\int_x p(x)\, D(x)\, dx - \int_x q(x)\, f^*\!\big(D(x)\big)\, dx\right] = \max_D \left\{E_{x\sim P}[D(x)] - E_{x\sim Q}\big[f^*\big(D(x)\big)\big]\right\}$$
The two expectations can be estimated from samples from $P$ and samples from $Q$. Setting $P = P_{data}$ and $Q = P_G$:
$$D_f(P_{data}\|P_G) = \max_D \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}\big[f^*\big(D(x)\big)\big]\right\}$$
Now you can minimize any f-divergence by solving $G^* = \arg\min_G \max_D V(G,D)$ with this $V$. The original GAN minimizes something related to the JS divergence.

16 Using the f-divergence you like
$$D_f(P_{data}\|P_G) = \max_D \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}\big[f^*\big(D(x)\big)\big]\right\}$$
Plug in the $f$ (and its conjugate $f^*$) of whichever f-divergence you like; the f-GAN paper tabulates these pairs for many divergences.

17 Double-loop v.s. Single-step
$G^* = \arg\min_G \max_D V(G,D)$
Original GAN paper: a double-loop algorithm. In each (outer) iteration, given the current generator $G_t$, the inner loop updates the discriminator parameters $\theta_D$ many times to find $D_t = \arg\max_D V(G_t, D)$; the outer loop then updates the generator once with one backpropagation: $\theta_G^{t+1} \leftarrow \theta_G^t - \eta\, \nabla_{\theta_G} V(\theta_G^t, \theta_D^t)$.
f-GAN paper: a single-step algorithm. In each iteration, one backpropagation updates the discriminator once, $\theta_D^{t+1} \leftarrow \theta_D^t + \eta\, \nabla_{\theta_D} V(\theta_G^t, \theta_D^t)$, and one updates the generator once, $\theta_G^{t+1} \leftarrow \theta_G^t - \eta\, \nabla_{\theta_G} V(\theta_G^t, \theta_D^t)$. The f-GAN paper shows that this still converges.

18 Experimental Results: approximating a mixture of Gaussians.

19 W-GAN: Using Earth Mover's Distance (Wasserstein Distance)
For a quick start, see the read-throughs "Read-through: Wasserstein GAN", "令人拍案叫絕的Wasserstein GAN" (in Chinese), and "Wasserstein GAN and the Kantorovich-Rubinstein Duality". (Wasserstein was a Russian-American mathematician.)
Martin Arjovsky, Soumith Chintala, Léon Bottou, "Wasserstein GAN", arXiv preprint, 2017

20 Earth Mover’s Distance
Consider one distribution $P$ as a pile of earth and another distribution $Q$ as the target. The earth mover's distance is the average distance the earth mover has to move the earth to turn $P$ into $Q$. In the simple example shown, all the mass of $P$ is moved a distance $d$, so $W(P,Q) = d$.

21 Earth Mover’s Distance
If there are many possible "moving plans", use the one with the smallest average distance to define the earth mover's distance.

22 A “moving plan” is a matrix
The value of each element is the amount of earth moved from one position to another. Average distance of a plan $\gamma$:
$$B(\gamma) = \sum_{x_g, x_d} \gamma(x_g, x_d)\, \|x_g - x_d\|$$
Earth Mover's Distance: take the best plan over the set $\Pi$ of all possible plans,
$$W(P_G, P_{data}) = \min_{\gamma \in \Pi} B(\gamma)$$
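A minimal sketch of this minimization for small discrete distributions, written as a linear program with scipy (the example distributions, positions, and cost matrix are made up):

import numpy as np
from scipy.optimize import linprog

# Two discrete distributions over positions 0..3 (made-up numbers).
p = np.array([0.4, 0.3, 0.2, 0.1])          # pile of earth (P_G)
q = np.array([0.1, 0.2, 0.3, 0.4])          # target shape (P_data)
pos = np.arange(4.0)
cost = np.abs(pos[:, None] - pos[None, :])  # ||x_g - x_d|| for each cell of the plan

n = len(p)
# Marginal constraints: each row of gamma sums to p[i], each column to q[j].
A_eq = np.zeros((2 * n, n * n))
for i in range(n):
    A_eq[i, i * n:(i + 1) * n] = 1          # row-sum constraint
    A_eq[n + i, i::n] = 1                   # column-sum constraint
b_eq = np.concatenate([p, q])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print("Earth mover's distance:", res.fun)   # min over all moving plans gamma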

23 Earth Mover’s Distance
Consider a sequence of generators $P_{G_0}, P_{G_{50}}, \dots, P_{G_{100}}$ whose distributions move closer to $P_{data}$ (separations $d_0 > d_{50} > 0$). As long as the distributions do not overlap, the JS divergence does not change: $JS(P_{G_0}, P_{data}) = \log 2$ and $JS(P_{G_{50}}, P_{data}) = \log 2$, dropping to $JS(P_{G_{100}}, P_{data}) = 0$ only when they coincide. The earth mover's distance, in contrast, decreases smoothly: $W(P_{G_0}, P_{data}) = d_0$, $W(P_{G_{50}}, P_{data}) = d_{50}$, $W(P_{G_{100}}, P_{data}) = 0$.
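A quick check of the "W tracks the separation d" behaviour using scipy's 1-D Wasserstein distance (the Gaussian samples and the list of shifts are illustrative):

import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=100000)        # stand-in for P_data

for d in [5.0, 2.0, 0.5, 0.0]:                            # generator getting closer
    gen = rng.normal(loc=d, scale=1.0, size=100000)       # stand-in for P_G, shifted by d
    print(d, wasserstein_distance(data, gen))             # W is roughly d, shrinking smoothly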

24 Back to the GAN framework
Compare with the f-GAN objective $D_f(P_{data}\|P_G) = \max_D \{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[f^*(D(x))]\}$. The Wasserstein distance has a similar dual form:
$$W(P_{data}, P_G) = \max_{D \in 1\text{-Lipschitz}} \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)]\right\}$$
A Lipschitz function satisfies $\|f(x_1) - f(x_2)\| \le K\|x_1 - x_2\|$; $K = 1$ for "1-Lipschitz" (Lipschitz continuity).

25 Back to the GAN framework
$$W(P_{data}, P_G) = \max_{D \in 1\text{-Lipschitz}} \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)]\right\}$$
Picture: $P_{data}$ concentrated at $x_1$ and $P_G$ at $x_2$, separated by distance $d$. Without the Lipschitz constraint (blue curve, like the original GAN's discriminator), the maximization pushes $D(x_1) \to +\infty$ and $D(x_2) \to -\infty$, and the discriminator saturates. With the 1-Lipschitz constraint $\|D(x_1) - D(x_2)\| \le \|x_1 - x_2\|$ (green curve, WGAN), the values can differ by at most $d$, e.g., $D(x_1) = k + d$ and $D(x_2) = k$, so the discriminator keeps a non-vanishing slope and provides a gradient that pushes $P_G$ towards $P_{data}$.

26 Back to the GAN framework
$$W(P_{data}, P_G) = \max_{D \in 1\text{-Lipschitz}} \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)]\right\}$$
How do we enforce the Lipschitz constraint while optimizing with gradient descent? Original WGAN: weight clipping. After each parameter update, clip every weight $w$ to the range $[-c, c]$: if $w > c$ then $w = c$; if $w < -c$ then $w = -c$. This only ensures $\|D(x_1) - D(x_2)\| \le K\|x_1 - x_2\|$ for some $K$, not necessarily $K = 1$.

27 Algorithm of WGAN
Initialize $\theta_d$ for D and $\theta_g$ for G. Use RMSProp instead of Adam. In each training iteration:
Learning D (repeat k times):
- Sample m examples $\{x^1, x^2, \dots, x^m\}$ from the data distribution $P_{data}(x)$.
- Sample m noise samples $\{z^1, z^2, \dots, z^m\}$ from the prior $P_{prior}(z)$ and obtain generated data $\{\tilde{x}^1, \dots, \tilde{x}^m\}$, $\tilde{x}^i = G(z^i)$.
- Update discriminator parameters $\theta_d$ to maximize $\tilde{W} = \frac{1}{m}\sum_{i=1}^m D(x^i) - \frac{1}{m}\sum_{i=1}^m D(\tilde{x}^i)$ (no sigmoid and no log): $\theta_d \leftarrow \theta_d + \eta\, \nabla \tilde{W}(\theta_d)$, then apply weight clipping.
Learning G (only once):
- Sample another m noise samples $\{z^1, z^2, \dots, z^m\}$ from the prior $P_{prior}(z)$.
- Update generator parameters $\theta_g$ to minimize $\tilde{W}$ (equivalently, to maximize $\frac{1}{m}\sum_{i=1}^m D(G(z^i))$): $\theta_g \leftarrow \theta_g - \eta\, \nabla \tilde{W}(\theta_g)$.
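A sketch of one WGAN training iteration in PyTorch following the algorithm above (the networks, k, the clip value c, and the learning rate are illustrative assumptions, and sample_real() is an assumed helper that returns a batch of real data):

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))   # no sigmoid
opt_D = torch.optim.RMSprop(D.parameters(), lr=5e-5)                   # RMSProp, not Adam
opt_G = torch.optim.RMSprop(G.parameters(), lr=5e-5)
k, c = 5, 0.01                                                         # critic steps, clip value

def wgan_iteration(sample_real):            # sample_real() -> (m, 784) batch of real data
    for _ in range(k):                      # learning D, repeated k times
        x = sample_real()
        z = torch.randn(x.size(0), 100)
        W_hat = D(x).mean() - D(G(z).detach()).mean()          # no sigmoid and no log
        opt_D.zero_grad(); (-W_hat).backward(); opt_D.step()   # gradient ascent on W_hat
        for p in D.parameters():            # weight clipping: keep w in [-c, c]
            p.data.clamp_(-c, c)

    z = torch.randn(64, 100)                # learning G, only once
    loss_G = -D(G(z)).mean()                # maximize D(G(z)) <=> minimize -D(G(z))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()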

28 https://arxiv.org/abs/1701.07875

29 (Figures: generated samples, W-GAN vs. GAN, for three setups — DCGAN generator; DCGAN generator with no batch normalization and a bad structure; MLP generator.)

30 Improved W-GAN
Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, Aaron Courville, "Improved Training of Wasserstein GANs", arXiv preprint, 2017

31 Improved WGAN
$$W(P_{data}, P_G) = \max_{D \in 1\text{-Lipschitz}} \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)]\right\}$$
is replaced by an unconstrained objective with a gradient penalty:
$$W(P_{data}, P_G) \approx \max_D \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)] - \lambda\, E_{x\sim P_{inter}}\left[\big(\|\nabla_x D(x)\| - 1\big)^2\right]\right\}$$
where $P_{inter}$ samples points lying between $P_{data}$ and $P_G$.

32 Improved WGAN https://arxiv.org/abs/1704.00028
Through experiments on small datasets, the paper shows how weight clipping in the discriminator leads to pathological behaviour that hurts stability and performance. It proposes WGAN with gradient penalty, which avoids these problems. It shows that the method converges faster than standard WGAN and generates higher-quality samples. It also shows how the method provides stable GAN training: with almost no hyperparameter tuning, it successfully trains many GAN architectures for image generation and language modeling.

33 Improved WGAN
For a (differentiable) function, being 1-Lipschitz, $\|D(x_1) - D(x_2)\| \le \|x_1 - x_2\|$, is equivalent to the norm of its gradient being at most 1 everywhere. So
$$W(P_{data}, P_G) = \max_{D \in 1\text{-Lipschitz}} \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)]\right\}$$
$$\approx \max_D \left\{E_{x\sim P_{data}}[D(x)] - E_{x\sim P_G}[D(x)] - \lambda \int_x \max\big(0, \|\nabla_x D(x)\| - 1\big)\, dx\right\}$$
The penalty forces the norm of the gradient to be smaller than 1. In practice the integral is replaced by an expectation, $-\lambda\, E_{x\sim P_{inter}}\big[\max\big(0, \|\nabla_x D(x)\| - 1\big)\big]$, and finally by a penalty that forces the norm of the gradient close to 1: $-\lambda\, E_{x\sim P_{inter}}\big[\big(\|\nabla_x D(x)\| - 1\big)^2\big]$.

34 Improved WGAN
$$-\lambda \int_x \max\big(0, \|\nabla_x D(x)\| - 1\big)\, dx \;\;\rightarrow\;\; -\lambda\, E_{x\sim P_{inter}}\big[\max\big(0, \|\nabla_x D(x)\| - 1\big)\big]$$
$P_{inter}$ only covers the region between $P_{data}$ and $P_G$, so the gradient constraint is applied only there, because that region is what influences how $P_G$ moves towards $P_{data}$. From the paper: "Given that enforcing the Lipschitz constraint everywhere is intractable, enforcing it only along these straight lines seems sufficient and experimentally results in good performance."

35 Improved WGAN
$$-\lambda\, E_{x\sim P_{inter}}\big[\max\big(0, \|\nabla_x D(x)\| - 1\big)\big] \;\;\rightarrow\;\; -\lambda\, E_{x\sim P_{inter}}\big[\big(\|\nabla_x D(x)\| - 1\big)^2\big]$$
The final penalty pushes the gradient norm towards 1; the optimal critic has its largest gradients, of norm 1, between $P_{data}$ and $P_G$. From the paper: "One may wonder why we penalize the norm of the gradient for differing from 1, instead of just penalizing large gradients. The reason is that the optimal critic ... actually has gradients with norm 1 almost everywhere under Pr and Pg (check the proof in the appendix)." "Simply penalizing overly large gradients also works in theory, but experimentally we found that this approach converged faster and to better optima."

36 Algorithm of Improved WGAN
Only the discriminator part is modified (and Adam is used again instead of RMSProp); the generator update is unchanged. In each training iteration, learning D (repeat k times):
- Sample m examples $\{x^1, \dots, x^m\}$ from the data distribution $P_{data}(x)$.
- Sample m noise samples $\{z^1, \dots, z^m\}$ from the prior $P_{prior}(z)$ and obtain generated data $\tilde{x}^i = G(z^i)$.
- Sample $\varepsilon^1, \dots, \varepsilon^m$ from $U[0,1]$ and interpolate: $\hat{x}^i \leftarrow \varepsilon^i x^i + (1-\varepsilon^i)\, \tilde{x}^i$.
- Update discriminator parameters $\theta_d$ to maximize $W' = \frac{1}{m}\sum_{i=1}^m D(x^i) - \frac{1}{m}\sum_{i=1}^m D(\tilde{x}^i) - \lambda \sum_{i=1}^m \big(\|\nabla_{\hat{x}} D(\hat{x}^i)\| - 1\big)^2$: $\theta_d \leftarrow \theta_d + \eta\, \nabla W'(\theta_d)$.
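A sketch of the gradient-penalty term in PyTorch (the discriminator D is assumed to take flat vectors; the shapes and the weight lam are illustrative assumptions):

import torch

def gradient_penalty(D, x_real, x_fake, lam=10.0):
    # Penalty lam * E[(||grad_x D(x_hat)|| - 1)^2] on interpolated points x_hat.
    eps = torch.rand(x_real.size(0), 1)                     # epsilon ~ U[0,1], one per example
    x_hat = (eps * x_real + (1 - eps) * x_fake).detach()    # interpolate between real and generated
    x_hat.requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    return lam * ((grad.norm(2, dim=1) - 1) ** 2).mean()

# During the discriminator update (subtracted because theta_d maximizes W'):
# W_prime = D(x_real).mean() - D(x_fake).mean() - gradient_penalty(D, x_real, x_fake)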

37 (Figure: generated samples from DCGAN, LSGAN, original WGAN, and Improved WGAN for three architectures — G: CNN, D: CNN; G: CNN (no normalization), D: CNN (no normalization); G: CNN (tanh), D: CNN (tanh).)

38 (Figure, continued: G: MLP, D: CNN; G: CNN (bad structure), D: CNN; G: 101 layers, D: 101 layers.)

39 Sentence Generation
A sentence such as "good bye." can be written in 1-of-N encoding: each character (g, o, o, d, <space>, ...) becomes a one-hot column vector, and the sentence becomes a matrix with a single 1 in each column. Consider this matrix as an "image".
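A tiny sketch of this encoding (the character set is an assumed alphabet for illustration):

import numpy as np

chars = list("abcdefghijklmnopqrstuvwxyz. ")          # assumed alphabet (N symbols)
index = {c: i for i, c in enumerate(chars)}

def one_hot_matrix(sentence):
    # Each character becomes a one-hot column; the sentence becomes an N x len matrix.
    m = np.zeros((len(chars), len(sentence)))
    for j, c in enumerate(sentence):
        m[index[c], j] = 1.0
    return m

print(one_hot_matrix("good bye.").shape)              # (28, 9) -- treat it like an image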

40 Sentence Generation
A real sentence is an exact 1-of-N matrix: each column contains a single 1. A generated matrix comes from a softmax, so each column is a distribution over characters (e.g., 0.9, 0.1, ...) and can never be exactly 1-of-N. A binary classifier can therefore immediately find the difference, the JS divergence stays at its maximum of log 2 (the two distributions never overlap), and the original GAN gets no useful gradient. WGAN is helpful here.

41 Improved WGAN successfully generating sentences

42 W-GAN – Generating Tang poems (唐詩鍊成)
Thanks to classmate 李仲翊 for providing the experimental results. The generator is a CNN; each output is 32 characters, including punctuation. Sample outputs:
升雲白遲丹齋取,此酒新巷市入頭。黃道故海歸中後,不驚入得韻子門。 據口容章蕃翎翎,邦貸無遊隔將毬。外蕭曾臺遶出畧,此計推上呂天夢。 新來寳伎泉,手雪泓臺蓑。曾子花路魏,不謀散薦船。 功持牧度機邈爭,不躚官嬉牧涼散。不迎白旅今掩冬,盡蘸金祇可停。 玉十洪沄爭春風,溪子風佛挺橫鞋。盤盤稅焰先花齋,誰過飄鶴一丞幢。 海人依野庇,為阻例沉迴。座花不佐樹,弟闌十名儂。 入維當興日世瀕,不評皺。頭醉空其杯,駸園凋送頭。 鉢笙動春枝,寶叅潔長知。官爲宻爛去,絆粒薛一靜。 吾涼腕不楚,縱先待旅知。楚人縱酒待,一蔓飄聖猜。 折幕故癘應韻子,徑頭霜瓊老徑徑。尚錯春鏘熊悽梅,去吹依能九將香。 通可矯目鷃須浄,丹迤挈花一抵嫖。外子當目中前醒,迎日幽筆鈎弧前。 庭愛四樹人庭好,無衣服仍繡秋州。更怯風流欲鴂雲,帛陽舊據畆婷儻。

43 More about Discrete Output
SeqGAN: Lantao Yu, Weinan Zhang, Jun Wang, Yong Yu, "SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient", AAAI, 2017
Jiwei Li, Will Monroe, Tianlin Shi, Sébastien Jean, Alan Ritter, Dan Jurafsky, "Adversarial Learning for Neural Dialogue Generation", arXiv preprint, 2017
Boundary-seeking GAN: R Devon Hjelm, Athul Paul Jacob, Tong Che, Kyunghyun Cho, Yoshua Bengio, "Boundary-Seeking Generative Adversarial Networks", arXiv preprint, 2017
Gumbel-Softmax: Matt J. Kusner, José Miguel Hernández-Lobato, "GANS for Sequences of Discrete Elements with the Gumbel-softmax Distribution", arXiv preprint, 2016
Tong Che, Yanran Li, Ruixiang Zhang, R Devon Hjelm, Wenjie Li, Yangqiu Song, Yoshua Bengio, "Maximum-Likelihood Augmented Discrete Generative Adversarial Networks", arXiv preprint, 2017

44 Conditional GAN

45 Conditional GAN
Training data are (text, image) pairs, e.g., c1: "a dog is running" with image x1, and c2: "a bird is flying" with image x2. Text-to-image by traditional supervised learning: a network takes the text (e.g., "train") as input and is trained to make its output as close as possible to the target image. Because several different images match the same text, the network outputs their average: a blurry image!

46 Conditional GAN
Instead, let the generator take both the condition c (e.g., "train") and a noise sample z from the prior distribution: x = G(c, z). The output is then a distribution over images, which should approximate the distribution of images that match the text, rather than a single blurry average.

47 Conditional GAN
Generator: x = G(c, z), with z from the prior distribution and c the condition (e.g., "train"). Discriminator v1 takes only x and outputs a scalar: is x realistic or not? With this discriminator the generator may produce realistic x that is not related to c. Discriminator v2 takes both c and x and outputs a scalar: is x realistic AND do c and x match? Positive example: ("train", matching real image). Negative examples: ("train", generated image) and ("cat", real train image), i.e., a realistic image paired with the wrong condition.
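A minimal sketch of the v2 discriminator loss with matched and mismatched pairs (the condition-embedding size, network, and batch shapes are illustrative assumptions):

import torch
import torch.nn as nn

# D scores a (condition, image) pair: realistic AND matched -> 1, otherwise -> 0.
D = nn.Sequential(nn.Linear(128 + 784, 256), nn.ReLU(), nn.Linear(256, 1), nn.Sigmoid())
bce = nn.BCELoss()

def d_loss(c, x_real, x_fake, c_wrong):
    # c, c_wrong: (m, 128) condition embeddings; x_real, x_fake: (m, 784) images.
    score = lambda cond, img: D(torch.cat([cond, img], dim=1))
    ones, zeros = torch.ones(c.size(0), 1), torch.zeros(c.size(0), 1)
    return (bce(score(c, x_real), ones)            # positive: matched real pair
            + bce(score(c, x_fake), zeros)         # negative: generated image
            + bce(score(c_wrong, x_real), zeros))  # negative: real image, wrong condition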

48 Image-to-image Translation
x = G(c, z), where the condition c is now an input image (e.g., a building façade layout).

49 Image-to-image Translation
Experimental results with traditional supervised learning (train a network to make its output as close as possible to the target image): at test time the output is blurry, because it is the average of several possible images.

50 Image-to-image Translation
Experimental results with a conditional GAN: generator G with noise z, discriminator D outputting real/fake. Test outputs are compared for "close" (supervised, close-to-target loss only), "GAN", and "GAN + close" (adversarial loss plus a closeness term).

51 Unpaired Transformation - Cycle GAN, Disco GAN
Transform an object from one domain to another without paired data, e.g., Domain X: photos, Domain Y: van Gogh paintings.

52 Cycle GAN
Train a generator G_{X→Y} so that its outputs become similar to domain Y, judged by a discriminator D_Y that outputs whether an input image belongs to domain Y or not. With only this objective, the generator could ignore its input and still fool D_Y, which is not what we want. (Cycle GAN is from the Berkeley AI Research (BAIR) laboratory, UC Berkeley.)

53 Cycle GAN
Add a second generator G_{Y→X} that maps the output back to domain X, and require the reconstruction to be as close as possible to the original input. If G_{X→Y} ignored its input, the reconstruction would lack the information needed to recover it. D_Y still judges whether the translated image belongs to domain Y or not.

54 Cycle GAN
The full model is trained in both directions. X → Y → X: G_{X→Y} then G_{Y→X}, with D_Y judging whether the translated image belongs to domain Y, and the reconstruction as close as possible to the original. Y → X → Y: G_{Y→X} then G_{X→Y}, with D_X judging whether the translated image belongs to domain X, and again the reconstruction as close as possible to the original.
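A sketch of the cycle-consistency part of the objective in PyTorch (the stand-in generators and the loss weight lam are illustrative assumptions; the adversarial terms from D_X and D_Y are omitted):

import torch
import torch.nn as nn

G_xy = nn.Sequential(nn.Linear(784, 784))     # stand-in generator X -> Y
G_yx = nn.Sequential(nn.Linear(784, 784))     # stand-in generator Y -> X
l1 = nn.L1Loss()

def cycle_loss(x, y, lam=10.0):
    # Reconstructions after a round trip should be as close as possible to the inputs.
    x_rec = G_yx(G_xy(x))                     # X -> Y -> X
    y_rec = G_xy(G_yx(y))                     # Y -> X -> Y
    return lam * (l1(x_rec, x) + l1(y_rec, y))

# Total loss = adversarial losses from D_X and D_Y + cycle_loss(x, y)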

55 Unpaired Transformation – turning real faces into anime characters (真人動畫化)
king/items/8d36d9029ad1203aac55 — turning real-person portrait photos into anime characters.

56 Unpaired Transformation – turning real faces into anime characters (真人動畫化)
Using the code: king/kawaii_creator. Note that it is not Cycle GAN or Disco GAN.

57 Evaluation
One option: human judgment. References:
Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, Roland Memisevic, "Generating images with recurrent adversarial networks", arXiv preprint, 2016
Tong Che, Yanran Li, Athul Paul Jacob, Yoshua Bengio, Wenjie Li, "Mode Regularized Generative Adversarial Networks", ICLR, 2017
Lucas Theis, Aäron van den Oord, Matthias Bethge, "A note on the evaluation of generative models", arXiv preprint, 2015

58 Likelihood
Take real data $\{x^i\}$ that were not observed during training and compute the log likelihood of the generator's distribution $P_G$ on them:
$$L = \frac{1}{N}\sum_i \log P_G(x^i)$$
The problem: for a GAN we cannot compute $P_G(x^i)$; we can only sample from $P_G$ by feeding the prior distribution through the generator.

59 Likelihood - Kernel Density Estimation
Estimate the distribution $P_G(x)$ from samples: draw samples from the generator and treat each sample as the mean of a Gaussian with a shared covariance (a common way is to use the bandwidth that minimises an optimality criterion). Now we have an approximation of $P_G$, so we can compute $P_G(x^i)$ for each real data point $x^i$ and then compute the likelihood.
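A sketch of this estimate with scipy's Gaussian kernel density estimator (the "generator samples" and "held-out real data" here are placeholder Gaussians):

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
gen_samples = rng.normal(size=(2, 10000))       # stand-in for samples from P_G, shape (dim, n)
real_data   = rng.normal(size=(2, 500))         # stand-in for held-out real data x^i

kde = gaussian_kde(gen_samples)                 # each sample becomes a Gaussian kernel (shared bandwidth)
log_likelihood = kde.logpdf(real_data).mean()   # L = (1/N) * sum_i log P_G(x^i)
print(log_likelihood)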

60 Likelihood - Kernel Density Estimation
Open issues: how many samples should be drawn? And the estimates can give weird results. Estimated likelihoods: DBN: 138; GAN: 225; True Distribution: 243; K-means: 313. A simple k-means model scores higher than the true distribution, which makes the estimate hard to trust.

61 Likelihood v.s. Quality
Low likelihood, high quality? Consider a generator that simply outputs images from the training set: the generated images look perfect, but for held-out real data $x^i$ it assigns $P_G(x^i) = 0$, so the likelihood $L = \frac{1}{N}\sum_i \log P_G(x^i)$ is terrible.
High likelihood, low quality? Mix a poor generator $G_1$ with weight 0.99 and a good generator $G_2$ with weight 0.01. The mixture has nearly the same log likelihood as $G_2$ alone (a drop of at most about 4.6, negligible compared with typical log likelihoods of around 100 or more), yet 99% of its samples come from $G_1$.
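The arithmetic behind the 0.99/0.01 mixture argument, written out (the mixture notation $P_{G_1}$, $P_{G_2}$ follows the slide; this is a standard back-of-the-envelope bound rather than text from the slide):
$$P_G(x) = 0.99\, P_{G_1}(x) + 0.01\, P_{G_2}(x) \;\ge\; 0.01\, P_{G_2}(x)$$
$$\log P_G(x) \;\ge\; \log P_{G_2}(x) - \log 100 \;\approx\; \log P_{G_2}(x) - 4.6$$
$$L = \frac{1}{N}\sum_i \log P_G(x^i) \;\ge\; \frac{1}{N}\sum_i \log P_{G_2}(x^i) - 4.6$$
So if the good model's average log likelihood is on the order of 100, mixing in 99% bad samples lowers $L$ by only a few percent, even though almost every generated sample is poor.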

62 Evaluate by Other Networks
Feed each generated image $x$ into a well-trained CNN classifier to get a class distribution $P(y|x)$. For a single image, lower entropy of $P(y|x)$ means higher visual quality (the classifier is confident about what it sees). Averaged over many generated images, the marginal $\frac{1}{N}\sum_n P(y^n|x^n)$ should have high entropy, which means high variety.
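A numpy sketch of an Inception-Score-style number built from these two quantities (the class-probability matrix is random placeholder data rather than real classifier outputs):

import numpy as np

rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))                           # placeholder CNN outputs for 1000 images
p_yx = np.exp(logits) / np.exp(logits).sum(1, keepdims=True)   # P(y|x) for each generated image

p_y = p_yx.mean(axis=0)                                        # marginal class distribution (want high entropy)
kl = np.sum(p_yx * (np.log(p_yx) - np.log(p_y)), axis=1)       # KL(P(y|x) || P(y)) per image
score = np.exp(kl.mean())                                      # high when P(y|x) is sharp and P(y) is spread out
print(score)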

63 Evaluate by Other Networks - Inception Score

64 Evaluate by Other Networks - Inception Score
(Figure: Inception Scores reported in the Improved W-GAN paper.)

65 K-Nearest Neighbor
Use k-nearest neighbours (finding the training images closest to each generated image) to check whether the generator generates new objects or merely memorizes the training set.

66 Missing Mode
If the discriminator always recognizes certain real images as real with high confidence, the generator is probably not covering them. Is the generator missing anything?

67 Energy-based GAN: another point of view

68 Original GAN
Role of the discriminator: to lead the generator. During training, D(x) is high on the data (target) distribution and low on the generated distribution, and its slope points the generated distribution towards the data. When the data distribution and the generated distribution are the same, the discriminator becomes useless (flat).

69 Original GAN The discriminator is flat in the end.
(Animation credit: Benjamin Striner.)

70 Energy-based Model
We want to find an evaluation function F(x). Input: an object x (e.g., an image); output: a scalar indicating how good x is. Real x should have high F(x). F(x) can be a network. Once we have F(x), we can find good x by generating x with large F(x). The question: how to find F(x)?

71 Energy-based GAN
How to find F(x)? (Figure: F(x) at successive stages of training; in the end, F(x) takes high values exactly where the real data lie.)

72 Energy-based GAN Preview: Framework of structured learning (Energy-based Model) ML Lecture 21: Structured Learning - Introduction ML Lecture 22: Structured Learning - Linear Model ML Lecture 23: Structured Learning - Structured SVM ML Lecture 24: Structured Learning - Sequence Labeling

