1
Adversarial Learning for Neural Dialogue Generation
Zhang Yan
Li, Jiwei, et al. "Adversarial Learning for Neural Dialogue Generation." arXiv preprint (2017).
2
Goal: train the system "to produce sequences that are indistinguishable from human-generated dialogue utterances."
3
Main Contribution: propose an adversarial training approach for response generation and cast the model in the framework of reinforcement learning.
4
Adversarial Reinforcement Model
5
Adversarial Training: a minimax game between the Generator and the Discriminator.
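A sketch of the minimax objective written for this dialogue setting; pairing the standard GAN objective with dialogue pairs {x, y} is my adaptation, not a formula taken from the slide:

```latex
% Standard GAN minimax game, written for dialogue: D scores a dialogue
% history/response pair {x, y}; G generates the response given the history x.
\min_G \max_D \;
  \mathbb{E}_{(x,y) \sim p_{\mathrm{human}}}\bigl[\log D(\{x, y\})\bigr]
  + \mathbb{E}_{\hat{y} \sim G(\,\cdot \mid x)}\bigl[\log\bigl(1 - D(\{x, \hat{y}\})\bigr)\bigr]
```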
6
Model Breakdown
The model has two main parts, G and D:
Generative Model (G): generates a response y given a dialogue history x consisting of a sequence of dialogue utterances. It is a standard Seq2Seq model with an attention mechanism.
Discriminative Model (D): a binary classifier that takes a sequence of dialogue utterances {x, y} as input and outputs a label indicating whether the input was generated by a human or by the machine. A hierarchical encoder followed by a 2-class softmax returns the probability that the input dialogue episode is machine-generated or human-generated. A sketch of D follows.
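A minimal PyTorch sketch of such a discriminator; the layer sizes, single-layer LSTMs, and class ordering are illustrative assumptions, not the authors' released code:

```python
# Hierarchical encoder (utterance-level LSTM -> dialogue-level LSTM)
# followed by a 2-class softmax over {human, machine}.
import torch
import torch.nn as nn

class HierarchicalDiscriminator(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.utt_rnn = nn.LSTM(emb_dim, hid_dim, batch_first=True)   # encodes each utterance
        self.ctx_rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)   # encodes the utterance sequence
        self.out = nn.Linear(hid_dim, 2)                             # 2-class output

    def forward(self, dialogue):
        # dialogue: list of LongTensors, one per utterance, each of shape (1, utt_len)
        utt_vecs = []
        for utt in dialogue:
            _, (h, _) = self.utt_rnn(self.embed(utt))   # h: (1, 1, hid_dim)
            utt_vecs.append(h[-1])                      # (1, hid_dim)
        ctx_in = torch.stack(utt_vecs, dim=1)           # (1, n_utterances, hid_dim)
        _, (h, _) = self.ctx_rnn(ctx_in)
        return torch.softmax(self.out(h[-1]), dim=-1)   # probabilities for the two classes
```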
8
Seq2Seq Models for Response Generation
(Sutskever et al., 2014; Jean et al., 2014). Source: input messages. Target: responses.
9
Seq2Seq Models with Attention Mechanism
[Luong et al., 2015] The attention mechanism predicts the output y using a weighted-average context vector c computed over all encoder hidden states, not just the last state.
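A minimal PyTorch sketch of this step, assuming Luong-style dot-product scoring; the scoring function and tensor shapes are assumptions:

```python
# Score each encoder state against the current decoder state, softmax the
# scores into attention weights, and take the weighted average as the context c.
import torch
import torch.nn.functional as F

def luong_attention(decoder_state, encoder_states):
    # decoder_state:  (batch, hid)
    # encoder_states: (batch, src_len, hid)
    scores = torch.bmm(encoder_states, decoder_state.unsqueeze(2)).squeeze(2)  # (batch, src_len)
    weights = F.softmax(scores, dim=1)                                         # attention weights
    context = torch.bmm(weights.unsqueeze(1), encoder_states).squeeze(1)       # (batch, hid)
    return context, weights

# The context vector c is then combined with the decoder state
# (e.g. concatenation + tanh) before predicting the next output token.
```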
12
Training Methods
Policy Gradient Methods: the score the discriminator assigns to the current utterance being human-generated is used as a reward for the generator, which is trained to maximize the expected reward of generated utterances using the REINFORCE algorithm.
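The objective and its likelihood-ratio (REINFORCE) approximation with a baseline, transcribed in standard notation; treat the exact symbols as my transcription rather than the slide's own formula:

```latex
% Expected reward of a generated response, with the discriminator's probability
% Q_+({x, y}) that the dialogue is human-generated used as the reward.
J(\theta) = \mathbb{E}_{y \sim p(y \mid x)}\bigl[\, Q_+(\{x, y\}) \,\bigr]

% Likelihood-ratio (REINFORCE) approximation of its gradient, with a baseline
% b({x, y}) that reduces variance while keeping the estimate unbiased.
\nabla J(\theta) \approx
  \bigl[\, Q_+(\{x, y\}) - b(\{x, y\}) \,\bigr]\,
  \nabla \sum_{t} \log p\bigl(y_t \mid x, y_{1:t-1}\bigr)
```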
17
Training Methods (Cont’d)
In this gradient estimate, the discriminator’s classification score Q_+({x, y}) is the scalar reward; b({x, y}) is a baseline value that reduces the variance of the estimate while keeping it unbiased; p(y | x) is the policy; and the expectation is approximated by the likelihood ratio, so the gradient in parameter space updates the policy in the direction of the reward.
19
Training Methods (Cont’d)
Problem with REINFORCE: the expectation of the reward is approximated by only one sample, and the reward associated with that sample is used for all actions. Example: for the input "What’s your name?", the human response is "I am John" and the machine response is "I don’t know". The vanilla REINFORCE model assigns the same negative reward to all tokens within "I don’t know" (i.e., I, don’t, know), whereas proper credit assignment would give separate rewards: most likely a neutral reward for the token "I" and negative rewards for "don’t" and "know". The authors call this Reward for Every Generation Step (REGS); the difference is sketched below.
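A tiny numeric sketch of the credit-assignment difference, with made-up log-probabilities and rewards (in REGS the per-step rewards would come from the discriminator, as described on the following slides):

```python
import numpy as np

# log p(y_t | x, y_<t) for the machine tokens [I, don't, know] (made-up values)
log_probs = np.array([-0.2, -1.1, -0.9])

# Vanilla REINFORCE: one sequence-level (baselined) reward broadcast to every token.
seq_reward = -0.8
loss_vanilla = -(seq_reward * log_probs).sum()

# REGS: a separate reward per generation step (neutral for "I", negative afterwards).
step_rewards = np.array([0.0, -0.8, -0.9])
loss_regs = -(step_rewards * log_probs).sum()

print(loss_vanilla, loss_regs)
```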
24
Reward for Every Generation Step (REGS)
We need rewards for intermediate steps. Two strategies are introduced: Monte Carlo (MC) search, and training the discriminator to reward partially decoded sequences.
25
Monte Carlo Search
Given a partially decoded sequence s, the model keeps sampling tokens from the distribution until decoding finishes. This is repeated N times, so the N generated sequences share the common prefix s. These N sequences are fed to the discriminator, and the average score is used as the reward for s.
Background on the discriminator’s training objective (synthetic-data setup): the parameters of an LSTM are first initialized from the normal distribution N(0, 1) and used as an oracle describing the real data distribution G_oracle(x_t | x_1, ..., x_{t−1}); this oracle generates 10,000 sequences of length 20 as the training set S for the generative models. The discriminator is optimized with supervised training to minimize the cross entropy, the standard objective for classification and prediction tasks: L(y, ŷ) = −y log ŷ − (1 − y) log(1 − ŷ), where y is the ground-truth label of the input sequence and ŷ is the probability predicted by the discriminator.
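A minimal sketch of this procedure, assuming hypothetical generator/discriminator interfaces (sample_continuation, prob_human); the default number of rollouts is an assumption:

```python
# Roll out N completions of a partial response that share its prefix, and use
# the discriminator's average "human" score as the reward for that prefix.
def mc_reward(generator, discriminator, dialogue_history, partial_response, n_rollouts=5):
    scores = []
    for _ in range(n_rollouts):
        # Complete the response starting from the shared prefix.
        full_response = generator.sample_continuation(dialogue_history, partial_response)
        # Discriminator's probability that {history, response} is human-generated.
        scores.append(discriminator.prob_human(dialogue_history, full_response))
    return sum(scores) / n_rollouts
```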
26
Monte Carlo Search (Cont’d)
Drawback: running N complete rollouts for every partially decoded sequence is time-consuming.
27
Rewarding Partially Decoded Sequences
Directly train a discriminator that can assign rewards to both fully and partially decoded sequences, by breaking generated sequences into partial sequences for discriminator training. Problem: earlier actions in a sequence are shared among multiple training examples for the discriminator, which results in overfitting. The authors propose a strategy similar to one used in AlphaGo to mitigate the problem.
28
Rewarding Partially Decoded Sequences
For each collection of subsequences of Y, randomly sample only one example from the positive examples and one from the negative examples, and use these to update the discriminator. This is more time-efficient than the MC approach, but the reward estimates are less accurate. A sketch follows.
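A minimal sketch of this subsampling step, assuming responses are given as token lists; the data layout and labels are illustrative assumptions:

```python
# Instead of training the discriminator on every partial sequence (shared
# prefixes lead to overfitting), sample one positive (human) and one negative
# (machine) partial sequence per dialogue.
import random

def sample_partial_examples(history, human_response, machine_response):
    # Each "collection of subsequences" is the set of prefixes of a response.
    pos_len = random.randrange(1, len(human_response) + 1)
    neg_len = random.randrange(1, len(machine_response) + 1)
    positive = (history, human_response[:pos_len], 1)    # label 1: human
    negative = (history, machine_response[:neg_len], 0)  # label 0: machine
    return positive, negative

# Example usage with token lists.
pos, neg = sample_partial_examples(["what's", "your", "name"],
                                   ["i", "am", "john"],
                                   ["i", "don't", "know"])
```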
30
Rewarding Partially Decoded Sequences
With per-step rewards, the policy gradient in parameter space uses a classification score and a baseline value at every step: ∇J(θ) ≈ Σ_t [Q_+({x, Y_1:t}) − b({x, Y_1:t})] ∇ log p(y_t | x, Y_1:t−1), so each token’s log-probability is weighted by its own baselined classification score.
31
Teacher Forcing
The generative model is still unstable, because the generator is only indirectly exposed to the gold-standard target sequences, through the reward passed back from the discriminator; this reward is used only to promote or discourage the generator’s own generated sequences. This is fragile: once the generator accidentally deteriorates in some training batches, and the discriminator consequently does an extremely good job at recognizing sequences from the generator, the generator immediately gets lost; it knows that the generated results are bad, but does not know what results are good.
33
Teacher Forcing
The authors propose feeding human-generated responses to the generator for model updates: the discriminator automatically assigns a reward of 1 to a human response, and the generator uses this reward to update itself on that response. This is analogous to having a teacher intervene and force the generator to produce the true responses; a sketch follows.
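A minimal sketch of this update, assuming hypothetical generator methods log_prob and update; with the reward fixed at 1 the step reduces to a standard maximum-likelihood (teacher forcing) update on the gold response:

```python
# The human response receives a fixed reward of 1 and the generator takes a
# policy-gradient-style step on it, which is equivalent to teacher forcing.
def teacher_forcing_step(generator, dialogue_history, human_response, learning_rate=0.01):
    reward = 1.0  # discriminator's reward for a human response
    log_p = generator.log_prob(dialogue_history, human_response)
    generator.update(reward * log_p, learning_rate)
```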
34
Pseudocode for the Algorithm
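The pseudocode itself appears to have been an image that did not survive the transcription. Below is a hedged reconstruction of the training loop from the steps described on the previous slides; every helper (sample_batch, the generator and discriminator methods) is an assumed interface, not the paper’s actual pseudocode:

```python
# Alternate discriminator updates with REINFORCE/REGS generator updates,
# plus a teacher-forcing update on the human response.
def adversarial_training(generator, discriminator, data, n_iterations):
    for _ in range(n_iterations):
        history, human_response = sample_batch(data)

        # 1) Update the discriminator on a human vs. machine pair.
        machine_response = generator.sample(history)
        discriminator.train_step((history, human_response, 1),
                                 (history, machine_response, 0))

        # 2) Update the generator with the discriminator's score as reward
        #    (sequence-level REINFORCE, or per-step rewards for REGS).
        reward = discriminator.prob_human(history, machine_response)
        generator.reinforce_step(history, machine_response, reward)

        # 3) Teacher forcing: update the generator on the human response
        #    with a reward of 1.
        generator.reinforce_step(history, human_response, reward=1.0)
```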
35
Result
36
Result
The adversarially trained system generates higher-quality responses than previous baselines.
37
Notes
The approach did not show great performance on the abstractive summarization task. This may be because the adversarial training strategy is more beneficial to: (1) tasks in which there is a big discrepancy between the distributions of the generated sequences and the reference target sequences; and (2) tasks in which the input sequences do not bear all the information needed to generate the target, in other words, tasks for which there is no single correct target sequence in the semantic space.