An Asymptotic Analysis of Generative, Discriminative, and Pseudolikelihood Estimators
by Percy Liang and Michael Jordan (ICML 2008)
Presented by Lihan He, ECE, Duke University, June 27, 2008
Outline
Introduction
Exponential family estimators
  Generative
  Fully discriminative
  Pseudolikelihood discriminative
Asymptotic analysis
Experiments
Conclusions
Introduction
Data points are not assumed to be drawn independently: there are correlations between data points. Given data, we therefore have to consider the joint distribution over all the data points; correspondingly, the overall likelihood is not the product of the per-data-point likelihoods.
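As a reminder (standard chain rule, not reproduced from the slides), this is exactly why dependence breaks the usual factorization:

```latex
p_\theta(y_1,\dots,y_m)
  \;=\; \prod_{j=1}^{m} p_\theta\!\left(y_j \mid y_{1:j-1}\right)
  \;\neq\; \prod_{j=1}^{m} p_\theta(y_j)
  \quad \text{(unless the } y_j \text{ are independent).}
```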
Introduction: Generative vs. Discriminative
Generative model: a model for randomly generating the observed data; learns a joint probability distribution over both observations and labels.
Discriminative model: a model only of the label variables conditioned on the observed data; learns a conditional distribution over labels given observations.
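In symbols (standard maximum-likelihood objectives, not reproduced from the slides), with observations x and labels y:

```latex
\hat\theta_{\mathrm{gen}} = \arg\max_\theta \sum_{i=1}^{n} \log p_\theta\!\left(x^{(i)}, y^{(i)}\right),
\qquad
\hat\theta_{\mathrm{dis}} = \arg\max_\theta \sum_{i=1}^{n} \log p_\theta\!\left(y^{(i)} \mid x^{(i)}\right).
```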
Introduction: Full Likelihood vs. Pseudolikelihood
Full likelihood: the joint probability over the full set of dependencies between data points; could be intractable; computationally inefficient.
Pseudolikelihood: an approximation of the full likelihood; computationally more efficient.
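A standard concrete form is Besag's pseudolikelihood (the general composite-likelihood version follows on a later slide):

```latex
\ell_{\mathrm{full}}(\theta) = \log p_\theta(y_1,\dots,y_m),
\qquad
\ell_{\mathrm{pseudo}}(\theta) = \sum_{j=1}^{m} \log p_\theta\!\left(y_j \mid y_{\neg j}\right),
```

where y_{¬j} denotes all variables other than y_j; in a graphical model, only the neighbors of node j matter in the conditional.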
Estimators: Exponential Family Estimators
p_θ(z) = exp{⟨θ, φ(z)⟩ − A(θ)}, where φ(z) are the features, θ the model parameters, and A(θ) the log-normalization constant.
Example: conditional random field.
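For instance, a chain-structured conditional random field (written here as an assumed concrete instance, not copied from the slide) is the conditional member of this family:

```latex
p_\theta(y \mid x) = \exp\Big\{ \sum_{t} \big\langle \theta,\, \phi(x, y_t, y_{t+1}) \big\rangle - A(\theta; x) \Big\},
```

where A(θ; x) normalizes over all label sequences y for the given input x.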
Estimators: Composite Likelihood Estimators [Lindsay 1988]
One class of pseudolikelihood estimators; consists of a weighted sum of component likelihoods, each of which is the probability of one subset of the variables conditioned on another.
Partition the output variables (the partition is indexed by r) according to a fixed distribution P_r, and obtain the component likelihood p_θ(z_r | z_¬r).
Define the criterion function m_θ(z) = E_{r∼P_r}[log p_θ(z_r | z_¬r)], which reflects the quality of the estimator.
The maximum composite likelihood estimator maximizes the empirical criterion over the data: θ̂ = argmax_θ Σ_i m_θ(z_i).
Estimators: three estimators to be compared in the paper:
Generative: one component, the full joint p_θ(x, y).
Fully discriminative: one component, the conditional p_θ(y | x).
Pseudolikelihood discriminative: one component per output variable, p_θ(y_j | y_¬j, x).
(The three criteria are written out below.)
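Spelled out as criterion functions (notation assumed, following the composite-likelihood template above):

```latex
m^{\mathrm{gen}}_\theta(x, y) = \log p_\theta(x, y),
\qquad
m^{\mathrm{dis}}_\theta(x, y) = \log p_\theta(y \mid x),
\qquad
m^{\mathrm{pseudo}}_\theta(x, y) = \sum_{j} \log p_\theta\!\left(y_j \mid y_{\neg j}, x\right).
```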
Estimators: Risk Decomposition
The risk of an estimator decomposes as: risk = Bayes risk + approximation error + estimation error.
Bayes risk: the minimum achievable risk.
Approximation error: the intrinsic suboptimality of the estimator; unrelated to the data samples z.
Estimation error: incurred because we have only finite data.
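In symbols (a standard decomposition; θ* denotes the parameter the estimator converges to with unlimited data, a notation assumed here):

```latex
R(\hat\theta_n)
  \;=\; \underbrace{R^{\ast}}_{\text{Bayes risk}}
  \;+\; \underbrace{\big(R(\theta^{\ast}) - R^{\ast}\big)}_{\text{approximation error}}
  \;+\; \underbrace{\big(R(\hat\theta_n) - R(\theta^{\ast})\big)}_{\text{estimation error}}.
```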
Asymptotic Analysis
Well-specified model: all three estimators achieve the O(n^{-1}) convergence rate for the estimation error.
Misspecified model: only the fully discriminative estimator achieves the O(n^{-1}) rate; the other two converge at the O(n^{-1/2}) rate.
Experiments
Toy example: a four-node binary-valued graphical model.
The true model and the learned model differ in their parameterizations: depending on the setting of the true parameters, the learned model is either well-specified (its family contains the true distribution) or misspecified (it does not).
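A minimal runnable sketch in the same spirit (not the paper's exact toy model: here a 4-node binary chain MRF with a single shared edge parameter; all names are illustrative), comparing exact maximum likelihood with maximum pseudolikelihood:

```python
import itertools
import numpy as np
from scipy.optimize import minimize_scalar

# All 16 joint configurations of a 4-node binary chain, y_i in {-1, +1}.
configs = np.array(list(itertools.product([-1, 1], repeat=4)))
# Sufficient statistic: sum of products over the 3 chain edges.
suff = (configs[:, :-1] * configs[:, 1:]).sum(axis=1)

def log_partition(theta):
    # Exact log-normalizer by enumeration (feasible for 4 binary nodes).
    return np.log(np.exp(theta * suff).sum())

def sample(theta, n, rng):
    # Draw n exact samples from p_theta(y) proportional to exp(theta * suff(y)).
    p = np.exp(theta * suff - log_partition(theta))
    return configs[rng.choice(len(configs), size=n, p=p)]

def neg_log_lik(theta, data):
    # Full (generative) negative log-likelihood, averaged per example.
    s = (data[:, :-1] * data[:, 1:]).sum(axis=1)
    return -(theta * s - log_partition(theta)).mean()

def neg_pseudo_lik(theta, data):
    # Besag pseudolikelihood: sum_j -log p(y_j | neighbors of j).
    total = 0.0
    for j in range(4):
        nbr = np.zeros(len(data))
        if j > 0:
            nbr += data[:, j - 1]
        if j < 3:
            nbr += data[:, j + 1]
        # p(y_j | rest) = sigmoid(2 * theta * y_j * nbr); -log p = log1p(exp(-x)).
        total += np.log1p(np.exp(-2.0 * theta * data[:, j] * nbr)).mean()
    return total

rng = np.random.default_rng(0)
data = sample(theta=0.5, n=1000, rng=rng)
ml = minimize_scalar(neg_log_lik, args=(data,), bounds=(-3, 3), method="bounded").x
pl = minimize_scalar(neg_pseudo_lik, args=(data,), bounds=(-3, 3), method="bounded").x
print(f"true theta = 0.5, ML estimate = {ml:.3f}, PL estimate = {pl:.3f}")
```

Since the model family here contains the true distribution, both estimators are consistent; the paper's analysis concerns how fast their estimation errors shrink, and what happens when the family is misspecified.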
Experiments: plots for the toy example in the well-specified and misspecified settings.
Experiments
Part-of-speech (POS) tagging:
Input: a sequence of words. Output: a sequence of POS tags, i.e., noun, verb, etc. (45 tags total).
Specified model:
Node features: indicator functions of the form 1[y_t = a, x_t = b] (tag–word pairs).
Edge features: indicator functions of the form 1[y_t = a, y_{t+1} = b] (adjacent-tag pairs).
Training: Wall Street Journal, 38K sentences. Testing: Wall Street Journal, 5.5K sentences, from different sections than training.
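A small sketch of how such indicator features reduce to sufficient-statistic counts for one tagged sentence (function and feature names are illustrative, not from the paper):

```python
from collections import Counter

def feature_counts(words, tags):
    """Sufficient statistics phi(x, y) under the assumed indicator features."""
    phi = Counter()
    for w, t in zip(words, tags):
        phi[("node", t, w)] += 1    # node feature 1[y_t = a, x_t = b]
    for t1, t2 in zip(tags, tags[1:]):
        phi[("edge", t1, t2)] += 1  # edge feature 1[y_t = a, y_{t+1} = b]
    return phi

print(feature_counts(["the", "dog", "barks"], ["DT", "NN", "VBZ"]))
```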
Experiments
Use the learned generative model to sample 1000 training examples and 1000 test examples as synthetic data, so that on this data the model is well-specified by construction.
Conclusions
When the model is well-specified:
All three estimators achieve the O(n^{-1}) convergence rate;
There is no approximation error;
The asymptotic estimation errors are ordered: generative < fully discriminative < pseudolikelihood discriminative.
When the model is misspecified:
The fully discriminative estimator still achieves the O(n^{-1}) convergence rate, but the other two estimators only achieve the O(n^{-1/2}) rate;
Both the approximation error and the asymptotic estimation error of the fully discriminative estimator are lower than those of the generative and pseudolikelihood discriminative estimators.