Minimum-Risk Training of Approximate CRF-Based NLP Systems Veselin Stoyanov and Jason Eisner 1.

Presentation transcript:

Minimum-Risk Training of Approximate CRF-Based NLP Systems Veselin Stoyanov and Jason Eisner 1

Overview We will show significant improvements on three data sets. How do we do it? – A new training algorithm! Don’t be afraid of discriminative models with approximate inference! Use our software instead! 2

Minimum-Risk Training of Approximate CRF-Based NLP Systems NLP Systems: 3 [Slide figure: a block of raw input text feeding into an "NLP System" box.]

Minimum-Risk Training of Approximate CRF-Based NLP Systems Conditional random fields (CRFs) [Lafferty et al., 2001] Discriminative models of probability p(Y|X). Used successfully for many NLP problems. 4

Minimum-Risk Training of Approximate CRF-Based NLP Systems Linear-chain CRF: Exact inference is tractable. Training via maximum likelihood estimation is tractable and convex. [Slide figure: a chain of label variables Y1…Y4, each connected to an observation x1…x4.] 5
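As a concrete sketch of why exact inference is tractable in a chain, the forward algorithm computes the log-partition function in O(T·K²) time. This minimal NumPy version is our own illustration (the function name and array layout are assumptions, not part of the talk):

```python
import numpy as np

def forward_logZ(emission, transition):
    """Log-partition function of a linear-chain CRF via the forward algorithm.

    emission:   (T, K) array of per-position label scores (log-potentials)
    transition: (K, K) array of label-transition scores (log-potentials)
    """
    T, K = emission.shape
    alpha = emission[0].copy()                     # log-scores at position 0
    for t in range(1, T):
        # log-sum-exp over the previous label, for each current label
        scores = alpha[:, None] + transition + emission[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())
```

With all-zero scores every labeling gets weight 1, so the log-partition is log of the number of label sequences (e.g. log 4 for T = 2, K = 2).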

Minimum-Risk Training of Approximate CRF-Based NLP Systems CRFs (like BNs and MRFs) are probabilistic models; a CRF models the conditional probability p(Y|X). In NLP we are interested in making predictions, so we build prediction systems around CRFs. 6

Minimum-Risk Training of Approximate CRF-Based NLP Systems Inference: compute quantities about the distribution. The cat sat on the mat. [Slide figure: per-word tag distributions, e.g. "The": DT .9, NN .05; "cat": NN .8, JJ .1; "sat": VBD .7, VB .1; "on": IN .9, NN .01; "the": DT .9, NN .05; "mat": NN .4, JJ .3; ".": .99/.001.] 7

Minimum-Risk Training of Approximate CRF-Based NLP Systems Decoding: coming up with predictions based on the probabilities. The cat sat on the mat. [Slide figure: the predicted tag sequence DT NN VBD IN DT NN.] 8
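In a linear chain, decoding to the single best tag sequence is exact via the Viterbi algorithm. A minimal sketch under the same score conventions as standard treatments (names are our own, not from the talk):

```python
import numpy as np

def viterbi(emission, transition):
    """MAP decoding for a linear-chain CRF (scores are log-potentials)."""
    T, K = emission.shape
    delta = emission[0].copy()          # best score of any prefix ending in each label
    back = np.zeros((T, K), dtype=int)  # backpointers to the best previous label
    for t in range(1, T):
        scores = delta[:, None] + transition + emission[t][None, :]
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0)
    # Follow backpointers from the best final label
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```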

Minimum-Risk Training of Approximate CRF-Based NLP Systems General CRFs: Unrestricted model structure. Inference is intractable. Learning? [Slide figure: a loopy graph over Y1…Y4 with observations X1…X3.] 9

General CRFs Why sacrifice tractable inference and convex learning? Because a loopy model can represent the data better! Now you can train your loopy CRF using ERMA (Empirical Risk Minimization under Approximations)! 10

Minimum-Risk Training of Approximate CRF-Based NLP Systems In linear-chain CRFs, we can use Maximum Likelihood Estimation (MLE): – Compute gradients of the log-likelihood by running exact inference. – The log-likelihood is concave, so learning finds a global optimum. 11

Minimum-Risk Training of Approximate CRF-Based NLP Systems We use CRFs with several approximations: – Approximate inference. – Approximate decoding. – Mis-specified model structure. – MAP training (vs. Bayesian). Should we still be maximizing data likelihood? (These approximations can be present in linear-chain CRFs as well.) 12

Minimum-Risk Training of Approximate CRF-Based NLP Systems End-to-End Learning [Stoyanov, Ropson & Eisner, AISTATS2011] : – We should learn parameters that work well in the presence of approximations. – Match the training and test conditions. – Find the parameters that minimize training loss. 13

Minimum-Risk Training of Approximate CRF-Based NLP Systems Select ϴ that minimizes training loss, i.e., perform Empirical Risk Minimization under Approximations (ERMA). [Slide figure: x → p(y|x) → (approx.) inference → (approx.) decoding → ŷ → L(y*, ŷ); the whole pipeline is a black-box decision function parameterized by ϴ.] 14
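The idea of descending on the task loss through the whole predict-then-decode pipeline can be illustrated with a deliberately tiny stand-in. This is our own toy (logistic "inference", soft decoding, squared-error loss), not the ERMA system itself:

```python
import numpy as np

def train_min_risk(xs, ys, steps=500, lr=0.1):
    """Toy minimum-risk training: treat the whole inference + decoding
    pipeline as one differentiable function of theta, and run gradient
    descent on the task loss (here, squared error of a soft decoder)."""
    theta = np.zeros(xs.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-xs @ theta))   # "inference": P(y=1 | x)
        yhat = p                                 # soft decoding rule
        # dLoss/dtheta via the chain rule (back-propagation by hand)
        grad = xs.T @ ((yhat - ys) * p * (1 - p)) / len(ys)
        theta -= lr * grad
    return theta
```

The point is structural: nothing in the loop asks whether theta maximizes likelihood; it only asks whether the end-to-end predictions incur low loss.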

Optimization Criteria 15–18 (a 2×2 grid, filled in over four slides):
– Neither loss-aware nor approximation-aware: MLE.
– Loss-aware only: SVMstruct [Finley and Joachims, 2008], M3N [Taskar et al., 2003], Softmax-margin [Gimpel & Smith, 2010].
– Loss-aware and approximation-aware: ERMA.

Minimum-Risk Training of Approximate CRF-Based NLP Systems through Back-Propagation Use back-propagation to compute gradients of the output loss with respect to the parameters. Use a local optimizer to find the parameters that (locally) minimize training loss. 19

Our Contributions Apply ERMA [Stoyanov, Ropson and Eisner; AISTATS2011] to three NLP problems. We show that: – General CRFs work better when they match dependencies in the data. – Minimum risk training results in more accurate models. – ERMA software package available at 20

The Rest of this Talk Experimental results A brief explanation of the ERMA algorithm 21

Experimental Evaluation 22

Implementation The ERMA software package ( ) Includes syntax for describing general CRFs. Can optimize several commonly used loss functions: MSE, Accuracy, F-score. The package is generic: – Little effort to model new problems. – About 1–3 days to express each problem in our formalism. 23

Specifics CRFs used with loopy BP for inference: – sum-product BP, i.e., loopy forward-backward. – max-product BP (annealed), i.e., loopy Viterbi. Two loss functions: Accuracy and F1. 24
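For reference, a single sum-product message update on a pairwise model looks like the following. This is a generic textbook sketch; the function and argument names are our own illustration, not the ERMA package's API:

```python
import numpy as np

def bp_message(psi_pair, psi_node, incoming):
    """One sum-product BP message from node i to neighbor j.

    psi_pair: (K, K) pairwise potential psi(x_i, x_j)
    psi_node: (K,)  unary potential at node i
    incoming: list of (K,) messages into i from neighbors other than j
    """
    belief = psi_node.copy()
    for m in incoming:           # multiply in all other incoming messages
        belief = belief * m
    msg = psi_pair.T @ belief    # sum over the states of x_i
    return msg / msg.sum()       # normalize for numerical stability
```

In loopy BP these updates are simply iterated around the cycles until (hopefully) convergence; in a chain they reduce to forward-backward.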

Modeling Congressional Votes The ConVote corpus [Thomas et al., 2006]: First, I want to commend the gentleman from Wisconsin (Mr. Sensenbrenner), the chairman of the committee on the judiciary, not just for the underlying bill… Yea Had it not been for the heroic actions of the passengers of United flight 93 who forced the plane down over Pennsylvania, congress's ability to serve … Yea Mr. Sensenbrenner 25–28

Modeling Congressional Votes An example from the ConVote corpus [Thomas et al., 2006]. Predict representative votes based on debates. [Slide figure: each speech's text feeds a Y/N vote variable, and vote variables are linked through the contexts in which speakers mention one another.] 29–32

Modeling Congressional Votes 33–37

                                                           Accuracy
Non-loopy baseline (2 SVMs + min-cut)                      71.2
Loopy CRF models (inference via loopy sum-prod BP):
  Maximum-likelihood training (with approximate inference) 78.2
  Softmax-margin (loss-aware)                              79.0
  ERMA (loss- and approximation-aware)                     84.5

*Boldfaced results are significantly better than all others (p < 0.05).

Information Extraction from Semi-Structured Text CMU Seminar Announcement Corpus [Freitag, 2000]: What: Special Seminar Who: Prof. Klaus Sutner Computer Science Department, Stevens Institute of Technology Topic: "Teaching Automata Theory by Computer" Date: 12-Nov-93 Time: 12:00 pm Place: WeH 4623 Host: Dana Scott (Asst: Rebecca Clark x8-6737) ABSTRACT: We will demonstrate the system "automata" that implements finite state machines… After the lecture, Prof. Sutner will be glad to demonstrate and discuss the use of MathLink and his "automata" package. [Slide highlights the speaker, start time, and location fields.] 38–39

Skip-Chain CRF for Info Extraction Extract speaker, location, stime, and etime from seminar announcements. [Slide figure: a linear chain of per-token tags, e.g. "Who:"/O, "Prof."/S, "Klaus"/S, "Sutner"/S, with skip-chain edges linking repeated mentions of "Sutner".] CMU Seminar Announcement Corpus [Freitag, 2000]. Skip-chain CRF [Sutton and McCallum, 2005; Finkel et al., 2005]. 40

Semi-Structured Information Extraction 41–44

                                                              F1
Non-loopy baseline (linear-chain CRF)                         86.2
Non-loopy baseline + ERMA (trained for loss, not likelihood)  87.1
Loopy CRF models (inference via loopy sum-prod BP):
  Maximum-likelihood training (with approximate inference)    89.5
  Softmax-margin (loss-aware)                                 90.2
  ERMA (loss- and approximation-aware)                        90.9

*Boldfaced results are significantly better than all others (p < 0.05).

Collective Multi-Label Classification Reuters Corpus Version 2 [Lewis et al., 2004]; [Ghamrawi and McCallum, 2005; Finley and Joachims, 2008]. The collapse of crude oil supplies from Libya has not only lifted petroleum prices, but added a big premium to oil delivered promptly. Before protests began in February against Muammer Gaddafi, the price of benchmark European crude for imminent delivery was $1 a barrel less than supplies to be delivered a year later. … Candidate labels: Oil, Libya, Sports. 45–48

Multi-Label Classification 49–52

                                                           F1
Non-loopy baseline (logistic regression for each label)    81.6
Loopy CRF models (inference via loopy sum-prod BP):
  Maximum-likelihood training (with approximate inference) 84.0
  Softmax-margin (loss-aware)                              83.8
  ERMA (loss- and approximation-aware)                     84.6

*Boldfaced results are significantly better than all others (p < 0.05).

Summary 53

                              Congressional Votes  Semi-str. Inf.    Multi-label
                              (Accuracy)           Extraction (F1)   Classification (F1)
Non-loopy baseline            71.2                 86.2              81.6
Loopy CRF, max-likelihood     78.2                 89.5              84.0
Loopy CRF, ERMA               84.5                 90.9              84.6

ERMA also helps on a range of synthetic graphical-model problems (AISTATS'11 paper).

ERMA training 54

Back-Propagation of Error for Empirical Risk Minimization Back-propagation of error (automatic differentiation in the reverse mode) to compute gradients of the loss with respect to θ. A gradient-based local optimization method to find the θ* that (locally) minimizes the training loss. [Slide figures: the same x → … → L(y*, ŷ) pipeline drawn first as a black-box decision function parameterized by ϴ, then as a neural network, then as a CRF system.] 55–59

Error Back-Propagation 60–69 [Slide animation: gradients flow backward through the computation, from P(Vote_ReidBill77 = Yea | x) through messages such as m(y1→y2) = m(y3→y1) · m(y4→y1), down to ϴ.]

Error Back-Propagation Applying the chain rule of differentiation over and over. Forward pass: – Regular computation (inference + decoding) in the model (+ remember intermediate quantities). Backward pass: – Replay the forward pass in reverse, computing gradients. 70

The Forward Pass Run inference and decoding: θ → inference (loopy BP): messages → beliefs → decoding: output → loss L. 71

The Backward Pass Replay the computation backward, calculating gradients: ð(L) = 1 → ð(output) → ð(beliefs) → ð(messages) → ð(θ), where ð(f) = ∂L/∂f. 72

Gradient-Based Optimization Use a local optimizer to find the θ* that minimizes training loss. In practice, we use a second-order method, Stochastic Meta-Descent [Schraudolph, 1999]. – Some more automatic-differentiation magic is needed to compute vector-Hessian products. Both the gradient and the vector-Hessian computation have the same complexity as the forward pass (small constant factor). 73
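A vector-Hessian product never requires forming the Hessian; reverse-mode AD delivers it at roughly the cost of one extra gradient evaluation. As a sanity-check sketch only (not SMD itself, and not how the ERMA package computes it), the same quantity can be approximated by finite differences of the gradient:

```python
import numpy as np

def hessian_vector(grad_fn, theta, v, eps=1e-5):
    """Hessian-vector product H(theta) @ v by central finite differences
    of the gradient. Automatic differentiation gives the same product
    exactly at the same asymptotic cost; this numeric version is a sketch."""
    g_plus = grad_fn(theta + eps * v)
    g_minus = grad_fn(theta - eps * v)
    return (g_plus - g_minus) / (2 * eps)
```

For f(θ) = Σ θ_i², the gradient is 2θ and the Hessian is 2I, so the product should return 2v.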

Minimum-Risk Training of Approximate CRF-Based NLP Systems ERMA leads to surprisingly large gains, improving the state of the art on 3 problems. You should try rich CRF models for YOUR application – even if you have to approximate – just train to minimize loss given the approximations! – using our ERMA software. [Slide repeats the loss-aware/approximation-aware grid: MLE; SVMstruct, M3N, Softmax-margin; ERMA.] 74

What can ERMA do for you? Future Work: Learn speed-aware models for fast test-time inference. Learn evidence-specific structures. Applications to relational data. 75 ERMA software package available at

Thank you. Questions? 76

Deterministic Annealing Some loss functions are not differentiable (e.g., accuracy). Some inference methods are not differentiable (e.g., max-product BP). Replace max with softmax and anneal. 77
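One common way to realize "replace max with softmax and anneal" is a temperature-controlled soft maximum; as the temperature is annealed toward zero it approaches the hard max. A minimal sketch (our illustration; the names are not from the talk):

```python
import numpy as np

def softmax_max(scores, temperature):
    """Differentiable surrogate for max(scores): a softmax-weighted average.
    As temperature -> 0 the weights concentrate on the argmax, recovering
    the hard (non-differentiable) max; annealing lowers the temperature
    gradually during training."""
    s = np.asarray(scores, dtype=float)
    z = s / temperature
    z = z - z.max()              # shift for numerical stability
    w = np.exp(z)
    w = w / w.sum()              # softmax weights
    return float(w @ s)
```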

Linear-Chain CRFs for Sequences Defined in terms of potential functions for transitions f_j(y_{i-1}, y_i) and emissions f_j(x_i, y_i). [Slide figure: the chain x1…x4, Y1…Y4.] 78

Synthetic Data Generate a CRF at random – Structure – Parameters Use Gibbs sampling to generate data Forget the parameters (but not the structure) Learn the parameters from the sampled data Evaluate using one of four loss functions Total of 12 models of different size and connectivity 79

Synthetic Data: Results

Test Loss   Train Loss   ΔLoss   wins ⦁ ties ⦁ losses
MSE         ApprLogL     .71
            MSE                  … ⦁ 0 ⦁ 0
Accuracy    ApprLogL     .75
            Accuracy             … ⦁ 0 ⦁ 1
F-Score     ApprLogL     1.17
            F-Score              … ⦁ 2 ⦁ 0

Synthetic Data: Introducing Structure Mismatch 81

Synthetic Data: Varying Approximation Quality 82

Automatic Differentiation in the Reverse Mode f(x, y) = x·y². ∂f/∂x = ? ∂f/∂y = ? Forward: t1 = y, t2 = y², t3 = x, t4 = t2·t3. Backward, with adjoints ð(g) = ∂V/∂g: ð(t4) = 1; ð(t3) = ð(t4)·(∂t4/∂t3) = 1·t2, so ∂f/∂x = ð(x) = t2 = y²; ð(t2) = t3; ð(t1) = 2·t1·t3, so ∂f/∂y = ð(y) = 2·t1·t3 = 2xy. 83
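The worked example on this slide can be run mechanically with a few lines of reverse-mode autodiff. This `Var` class is our own minimal sketch (it handles only multiplication and expressions where each intermediate feeds one consumer, which suffices for f(x, y) = x·y²):

```python
class Var:
    """Minimal reverse-mode autodiff node: value, adjoint, and a closure
    that propagates the adjoint to the node's inputs."""
    def __init__(self, value):
        self.value = value
        self.grad = 0.0
        self._backward = lambda: None

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def _backward():
            # chain rule for a product: adjoint times the other factor
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
            self._backward()
            other._backward()
        out._backward = _backward
        return out

    def backward(self):
        self.grad = 1.0          # seed: d(output)/d(output) = 1
        self._backward()
```

Running x = Var(3), y = Var(4), f = x * (y * y), then f.backward() yields x.grad = y² = 16 and y.grad = 2xy = 24, matching the hand derivation above.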