Named Entity Tagging Thanks to Dan Jurafsky, Jim Martin, Ray Mooney, Tom Mitchell for slides

Outline Named Entities and the basic idea IOB Tagging A new classifier: Logistic Regression Linear regression Logistic regression Multinomial logistic regression = MaxEnt Why classifiers aren’t as good as sequence models A new sequence model: MEMM = Maximum Entropy Markov Model

Named Entity Tagging CHICAGO (AP) — Citing high fuel prices, United Airlines said Friday it has increased fares by $6 per round trip on flights to some cities also served by lower-cost carriers. American Airlines, a unit of AMR, immediately matched the move, spokesman Tim Wagner said. United, a unit of UAL, said the increase took effect Thursday night and applies to most routes where it competes against discount carriers, such as Chicago to Dallas and Atlanta and Denver to San Francisco, Los Angeles and New York. Slide from Jim Martin

Named Entity Recognition Find the named entities and classify them by type Typical approach Acquire training data Encode using IOB labeling Train a sequential supervised classifier Augment with pre- and post-processing using available list resources (census data, gazetteers, etc.) Slide from Jim Martin

Temporal and Numerical Expressions Temporals Find all the temporal expressions Normalize them based on some reference point Numerical Expressions Find all the expressions Classify by type Normalize Slide from Jim Martin

NE Types Slide from Jim Martin

NE Types: Examples Slide from Jim Martin

Ambiguity Slide from Jim Martin

Biomedical Entities Disease, Symptom, Drug, Body Part, Treatment, Enzyme, Protein. Difficulty: discontiguous or overlapping mentions, e.g., "Abdomen is soft, nontender, nondistended, negative bruits"

NER Approaches As with partial parsing and chunking there are two basic approaches (and hybrids) Rule-based (regular expressions) Lists of names Patterns to match things that look like names Patterns to match the environments that classes of names tend to occur in. ML-based approaches Get annotated training data Extract features Train systems to replicate the annotation Slide from Jim Martin
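As a minimal illustration of the rule-based option (not from the original slides), a single regular expression can capture one crude "organization-like" pattern; the pattern and its suffix list are invented for this example:

```python
import re

# Illustrative rule-based pattern: sequences of capitalized words ending in a
# corporate-style suffix, as a crude "organization-like" matcher.
ORG_PATTERN = re.compile(r"\b(?:[A-Z][a-z]+\s)+(?:Inc\.|Corp\.|Airlines|Co\.)")

text = "American Airlines, a unit of AMR, immediately matched the move."
print(ORG_PATTERN.findall(text))   # ['American Airlines']
```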

ML Approach Slide from Jim Martin

Encoding for Sequence Labeling We can use IOB encoding: …United Airlines said Friday it has increased B_ORG I_ORG O O O O O the move , spokesman Tim Wagner said. O O O O B_PER I_PER O How many tags? For N classes we have 2*N+1 tags An I and B for each class and one O for no-class Each token in a text gets a tag Can use simpler IO tagging if what?
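A minimal sketch (not from the slides) of how IOB tags could be produced from labeled entity spans; the function name, span format, and example are illustrative assumptions:

```python
# Minimal sketch of IOB encoding for NER: one B_/I_/O tag per token.
def iob_encode(tokens, spans):
    """spans: list of (start_index, end_index_exclusive, entity_type) tuples."""
    tags = ["O"] * len(tokens)                  # default: no entity
    for start, end, etype in spans:
        tags[start] = f"B_{etype}"              # B marks the beginning of an entity
        for i in range(start + 1, end):
            tags[i] = f"I_{etype}"              # I marks the inside of the same entity
    return tags

tokens = ["United", "Airlines", "said", "Friday", "it", "has", "increased"]
spans = [(0, 2, "ORG")]                         # "United Airlines" is an organization
print(list(zip(tokens, iob_encode(tokens, spans))))
# [('United', 'B_ORG'), ('Airlines', 'I_ORG'), ('said', 'O'), ('Friday', 'O'), ...]
```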

NER Features Slide from Jim Martin

Discriminative vs Generative Generative model: estimate the full joint distribution P(y, x); use Bayes' rule to obtain P(y | x), or use argmax for classification. Discriminative model: estimate P(y | x) directly in order to predict y from x: $\hat{y} = \arg\max_y P(y \mid x)$

How to do NE tagging? Classifiers: Naïve Bayes, Logistic Regression. Sequence models: HMMs, MEMMs, CRFs, Convolutional Neural Networks. Sequence models work better.

Linear Regression Example from Freakonomics (Levitt and Dubner 2005): fantastic/cute/charming versus granite/maple. Can we predict price from the number of vague adjectives?
# vague adjectives    Price increase
4                     $0
3                     $1000
2                     $1500
2                     $6000
1                     $14000
0                     $18000

Linear Regression

Multiple Linear Regression Predicting values: $\text{price} = w_0 + \sum_{i=1}^{N} w_i \times f_i$. In general, let's pretend an extra "intercept" feature $f_0$ with value 1, so that $\text{price} = \sum_{i=0}^{N} w_i \times f_i = w \cdot f$

Learning in Linear Regression Consider one instance $x^{(j)}$ with observed value $y^{(j)}_{\text{obs}}$. We would like to choose weights that minimize the difference between predicted and observed values, summed over all training instances: $\text{cost}(W) = \sum_{j} \left(y^{(j)}_{\text{pred}} - y^{(j)}_{\text{obs}}\right)^2$. This is an optimization problem that turns out to have a closed-form solution.

Closed-form solution Put the feature vectors $f^{(j)}$ from the training set into a matrix $X$ of observations (one row per instance) and the observed values into a vector $y$. The formula that minimizes the cost: $W = (X^T X)^{-1} X^T y$
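A minimal numpy sketch of this closed-form solution, using adjective counts as a toy feature in the spirit of the Freakonomics example (the specific numbers are only illustrative):

```python
import numpy as np

# Toy data: number of vague adjectives -> price increase (illustrative values only).
adjective_counts = np.array([4, 3, 2, 2, 1, 0], dtype=float)
price_increase = np.array([0, 1000, 1500, 6000, 14000, 18000], dtype=float)

# Observation matrix X with the "intercept" feature f0 = 1 in the first column.
X = np.column_stack([np.ones_like(adjective_counts), adjective_counts])
y = price_increase

# Closed-form least-squares solution: W = (X^T X)^{-1} X^T y
W = np.linalg.inv(X.T @ X) @ X.T @ y
print("weights (w0, w1):", W)

# Predict the price increase for a listing described with 2 vague adjectives.
print("prediction for 2 adjectives:", np.array([1.0, 2.0]) @ W)
```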

Logistic Regression

Logistic Regression But in language problems we are doing classification: predicting one of a small set of discrete values. Could we just use linear regression for this? $P(y=\text{true} \mid x) = \sum_{i=0}^{N} w_i \times f_i$

Logistic regression Not possible: the linear sum does not fall between 0 and 1. Instead of predicting the probability, predict the odds, the ratio of probabilities: $\frac{P(y=\text{true} \mid x)}{1 - P(y=\text{true} \mid x)} = \sum_{i=0}^{N} w_i \times f_i$. Still not good: the odds lie between 0 and $\infty$, while the sum can be any real number. So how about if we predict the log of the odds (the logit): $\ln\left(\frac{P(y=\text{true} \mid x)}{1 - P(y=\text{true} \mid x)}\right) = \sum_{i=0}^{N} w_i \times f_i$

Logistic regression Solving this for $P(y=\text{true} \mid x)$: $P(y=\text{true} \mid x) = \frac{e^{\,w \cdot f}}{1 + e^{\,w \cdot f}} = \frac{1}{1 + e^{-w \cdot f}}$

Logistic function The inverse of the logit, aka the sigmoid, maps any real number to the range (0, 1): $\text{logit}^{-1}(x) = \frac{e^{x}}{1 + e^{x}} = \frac{1}{1 + e^{-x}}$

Logistic Regression How do we do classification? Choose $y=\text{true}$ when $P(y=\text{true} \mid x) > P(y=\text{false} \mid x)$, i.e., when $\frac{P(y=\text{true} \mid x)}{1 - P(y=\text{true} \mid x)} > 1$. Or: $e^{\,w \cdot f} > 1$, i.e., $w \cdot f > 0$. Or, in explicit sum notation: $\sum_{i=0}^{N} w_i f_i > 0$
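A minimal sketch of binary logistic-regression classification using the decision rule above; the weights and features are made up for illustration (in practice the weights would be learned):

```python
import math

def sigmoid(z):
    # Inverse logit: maps any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def classify(weights, features):
    # Decision rule: predict True exactly when w . f > 0,
    # which is equivalent to P(y=true | x) > 0.5.
    z = sum(w * f for w, f in zip(weights, features))
    return z > 0, sigmoid(z)

weights = [-1.0, 2.5, 0.7]       # illustrative weights; f0 = 1 is the intercept feature
features = [1.0, 1.0, 0.0]
label, prob = classify(weights, features)
print(label, round(prob, 3))     # True 0.818
```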

Multinomial logistic regression Multiple classes: $P(c \mid x) = \frac{\exp\left(\sum_{i=0}^{N} w_i f_i(c, x)\right)}{\sum_{c' \in C} \exp\left(\sum_{i=0}^{N} w_i f_i(c', x)\right)}$. One change: indicator feature functions $f_i(c, x)$ of both the class and the observation, instead of real values.
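A minimal sketch of the multinomial (MaxEnt) formulation with indicator feature functions; the classes, features, and weights are invented for illustration:

```python
import math

# Illustrative indicator feature functions f_i(c, x): each fires (returns 1)
# only for a particular combination of class and observation properties.
def f0(c, x): return 1 if c == "PERSON" and x["is_capitalized"] else 0
def f1(c, x): return 1 if c == "ORG" and x["ends_in_inc"] else 0
def f2(c, x): return 1 if c == "O" and not x["is_capitalized"] else 0

FEATURES = [f0, f1, f2]
WEIGHTS = [1.2, 2.0, 0.5]        # made-up weights; normally learned (e.g., with GIS)
CLASSES = ["PERSON", "ORG", "O"]

def maxent_probs(x):
    # P(c | x) = exp(sum_i w_i f_i(c, x)) / sum_c' exp(sum_i w_i f_i(c', x))
    scores = {c: math.exp(sum(w * f(c, x) for w, f in zip(WEIGHTS, FEATURES)))
              for c in CLASSES}
    z = sum(scores.values())
    return {c: s / z for c, s in scores.items()}

x = {"is_capitalized": True, "ends_in_inc": False}
print(maxent_probs(x))           # highest probability goes to PERSON here
```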

Estimating the weights Generalized Iterative Scaling (GIS) (Darroch and Ratcliff, 1972) Improved Iterative Scaling (IIS) (Della Pietra et al., 1995)

GIS: setup Requirements for running GIS: obey the form of the model and the constraints: $p(y \mid x) = \frac{1}{Z(x)} \exp\left(\sum_{j=1}^{k} \lambda_j f_j(x, y)\right)$, with $E_p f_j = E_{\tilde{p}} f_j$ for each feature $f_j$. An additional constraint: $\sum_{j=1}^{k} f_j(x, y) = C$ for all $(x, y)$, for some constant $C$. If this does not already hold, set $C = \max_{x,y} \sum_{j} f_j(x, y)$ and add a new feature $f_{k+1}(x, y) = C - \sum_{j=1}^{k} f_j(x, y)$.

GIS algorithm Compute the empirical expectations $d_j = E_{\tilde{p}} f_j$ for $j = 1, \ldots, k+1$. Initialize the weights $\lambda_j^{(0)}$ (any values, e.g., 0). Repeat until convergence: for each $j$, compute the expectation $E_{p^{(n)}} f_j$ under the current model, then update $\lambda_j^{(n+1)} = \lambda_j^{(n)} + \frac{1}{C} \log \frac{d_j}{E_{p^{(n)}} f_j}$.
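A minimal numpy sketch of GIS for a MaxEnt classifier under the update above; the array layout (features indexed by instance, class, and feature), the toy data, and the assumption that every feature has a nonzero empirical expectation are all choices made for this illustration:

```python
import numpy as np

def gis(F, labels, n_iter=200):
    """Minimal GIS sketch for a MaxEnt classifier (illustration only).

    F[i, c, j] = value of feature f_j on training instance i paired with class c.
    labels[i]  = index of the observed (gold) class for instance i.
    Assumes every feature has a nonzero empirical expectation.
    """
    F = np.asarray(F, dtype=float)
    labels = np.asarray(labels)
    n, n_classes, k = F.shape

    # GIS needs a constant feature sum C over every (x, c); if it does not hold,
    # add the correction feature f_{k+1}(x, c) = C - sum_j f_j(x, c).
    fsum = F.sum(axis=2)
    C = fsum.max()
    if not np.allclose(fsum, C):
        F = np.concatenate([F, (C - fsum)[:, :, None]], axis=2)
        k += 1

    lam = np.zeros(k)
    d = F[np.arange(n), labels, :].mean(axis=0)     # empirical expectations d_j

    for _ in range(n_iter):
        scores = F @ lam                            # shape (n, n_classes)
        p = np.exp(scores - scores.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)           # p[i, c] = current P(c | x_i)
        model_exp = np.einsum("ic,icj->j", p, F) / n
        lam += np.log(d / model_exp) / C            # GIS update
    return lam

# Toy usage: 3 instances, 2 classes, 2 indicator features (feature sums already constant).
F = [[[1, 0], [0, 1]],
     [[0, 1], [1, 0]],
     [[1, 0], [0, 1]]]
print(gis(F, labels=[0, 0, 1]))
```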

Features

Summary so far Naïve Bayes Classifier Logistic Regression Classifier Also called Maximum Entropy classifier

How do we apply classification to sequences?

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier NNP Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier DT Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier NN Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier CC Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier TO Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier VB Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier PRP Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier IN Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier DT Slide from Ray Mooney

Sequence Labeling as Classification Classify each token independently but use as input features, information about the surrounding tokens (sliding window). John saw the saw and decided to take it to the table. classifier NN Slide from Ray Mooney
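A minimal sketch of the sliding-window feature extraction used in the frames above; each token's feature dictionary would then be fed to any classifier (e.g., logistic regression) trained on such features. The feature names and window width are illustrative choices:

```python
def window_features(tokens, i, width=2):
    """Features for classifying tokens[i]: the token itself plus the surrounding
    tokens in a +/- width window (padded with a boundary marker)."""
    feats = {}
    for offset in range(-width, width + 1):
        j = i + offset
        word = tokens[j] if 0 <= j < len(tokens) else "<PAD>"
        feats[f"w[{offset}]"] = word.lower()
    feats["is_capitalized"] = tokens[i][0].isupper()
    return feats

tokens = "John saw the saw and decided to take it to the table .".split()
print(window_features(tokens, 3))
# {'w[-2]': 'saw', 'w[-1]': 'the', 'w[0]': 'saw', 'w[1]': 'and', 'w[2]': 'decided',
#  'is_capitalized': False}
```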

Using Outputs as Inputs Better input features are usually the categories of the surrounding tokens, but these are not available yet. We can use the category of either the preceding or the succeeding tokens by going forward or backward and using the previous outputs. Slide from Ray Mooney
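A minimal sketch of forward (left-to-right) classification in which the previously predicted tag is fed back as a feature; the toy classifier here is a hand-written stand-in so the example runs, not a trained model:

```python
# Toy stand-in for a trained classifier over (word, previous-tag) features.
def toy_classify(features):
    if features["prev_tag"] in ("NNP", "DT") and features["word"] == "saw":
        return "VBD" if features["prev_tag"] == "NNP" else "NN"
    if features["word"][0].isupper():
        return "NNP"
    if features["word"] in ("the", "a"):
        return "DT"
    return "VB"   # crude default

def forward_tag(tokens, classify=toy_classify):
    tags, prev = [], "<s>"
    for word in tokens:
        prev = classify({"word": word, "prev_tag": prev})  # previous output as input
        tags.append(prev)
    return tags

print(forward_tag("John saw the saw".split()))
# ['NNP', 'VBD', 'DT', 'NN']
```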

Forward Classification John saw the saw and decided to take it to the table. classifier NNP Slide from Ray Mooney

Forward Classification NNP John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Forward Classification NNP VBD John saw the saw and decided to take it to the table. classifier DT Slide from Ray Mooney

Forward Classification NNP VBD DT John saw the saw and decided to take it to the table. classifier NN Slide from Ray Mooney

Forward Classification NNP VBD DT NN John saw the saw and decided to take it to the table. classifier CC Slide from Ray Mooney

Forward Classification NNP VBD DT NN CC John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Forward Classification NNP VBD DT NN CC VBD John saw the saw and decided to take it to the table. classifier TO Slide from Ray Mooney

Forward Classification NNP VBD DT NN CC VBD TO John saw the saw and decided to take it to the table. classifier VB Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. DT NN John saw the saw and decided to take it to the table. classifier IN Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. IN DT NN John saw the saw and decided to take it to the table. classifier PRP Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. PRP IN DT NN John saw the saw and decided to take it to the table. classifier VB Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier TO Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. VBD TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier CC Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. CC VBD TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. VBD CC VBD TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier DT Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. DT VBD CC VBD TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier VBD Slide from Ray Mooney

Backward Classification Disambiguating “to” in this case would be even easier backward. VBD DT VBD CC VBD TO VB PRP IN DT NN John saw the saw and decided to take it to the table. classifier NNP Slide from Ray Mooney

NER as Sequence Labeling

Why classifiers are not as good as sequence models

Problems with using Classifiers for Sequence Labeling It is not easy to integrate information from hidden labels on both sides. We make a hard decision on each token; we should rather choose a global optimum, the best labeling for the whole sequence, keeping each local decision as just a probability rather than a hard decision.

Probabilistic Sequence Models Probabilistic sequence models allow integrating uncertainty over multiple, interdependent classifications and collectively determining the most likely global assignment. Common approaches: Hidden Markov Model (HMM), Conditional Random Field (CRF), Maximum Entropy Markov Model (MEMM, a simplified version of CRF), Convolutional Neural Networks (CNN).

HMMs vs. MEMMs Slide from Jim Martin

HMMs vs. MEMMs Slide from Jim Martin

HMMs vs. MEMMs Slide from Jim Martin

HMM vs MEMM

Viterbi in MEMMs We condition on the observation AND the previous state. HMM decoding: $v_t(j) = \max_i v_{t-1}(i)\, P(s_j \mid s_i)\, P(o_t \mid s_j)$, which is the HMM version of MEMM decoding: $v_t(j) = \max_i v_{t-1}(i)\, P(s_j \mid s_i, o_t)$
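A minimal sketch of Viterbi decoding for an MEMM under the recursion above, assuming we are handed a local model for P(state | prev_state, observation); the table-based toy model, state names, and start symbol are invented for the example:

```python
import math

def memm_viterbi(observations, states, start_state, local_prob):
    """Viterbi decoding for an MEMM.

    local_prob(prev_state, state, obs) should return P(state | prev_state, obs) > 0.
    Returns the highest-probability state sequence.
    """
    # viterbi[t][s] = (best log-probability of any path ending in s at time t, best previous state)
    viterbi = [{s: (math.log(local_prob(start_state, s, observations[0])), None)
                for s in states}]
    for t in range(1, len(observations)):
        column = {}
        for s in states:
            best_prev, best_score = None, float("-inf")
            for prev in states:
                score = viterbi[t - 1][prev][0] + math.log(local_prob(prev, s, observations[t]))
                if score > best_score:
                    best_prev, best_score = prev, score
            column[s] = (best_score, best_prev)
        viterbi.append(column)

    # Backtrace from the best final state.
    path = [max(states, key=lambda s: viterbi[-1][s][0])]
    for t in range(len(observations) - 1, 0, -1):
        path.append(viterbi[t][path[-1]][1])
    return list(reversed(path))

# Toy stand-in for the MEMM's local classifier P(state | prev_state, obs): not a real
# model, just positive scores that favor person tags on capitalized words.
def toy_prob(prev, state, obs):
    cap = obs[0].isupper()
    if cap and prev in ("B_PER", "I_PER"):
        return {"I_PER": 0.7, "B_PER": 0.2, "O": 0.1}[state]
    if cap:
        return {"B_PER": 0.7, "I_PER": 0.1, "O": 0.2}[state]
    return {"O": 0.8, "B_PER": 0.1, "I_PER": 0.1}[state]

print(memm_viterbi(["Tim", "Wagner", "said"], ["B_PER", "I_PER", "O"], "<s>", toy_prob))
# ['B_PER', 'I_PER', 'O']
```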

Decoding in MEMMs

Evaluation Metrics

Precision and Recall Precision: how many of the names we returned are really names? $P = \frac{\text{correctly identified names}}{\text{total names returned}}$. Recall: how many of the names in the database did we find? $R = \frac{\text{correctly identified names}}{\text{total names in the database}}$

F-measure F-measure is a way to combine these: $F_1 = \frac{2PR}{P + R}$. More generally: $F_\beta = \frac{(\beta^2 + 1)\,PR}{\beta^2 P + R}$

F-measure The harmonic mean is the reciprocal of the arithmetic mean of the reciprocals: $\text{HM}(a_1, \ldots, a_n) = \frac{n}{\frac{1}{a_1} + \cdots + \frac{1}{a_n}}$. Hence F-measure is a weighted harmonic mean of precision and recall: $F = \frac{1}{\alpha \frac{1}{P} + (1-\alpha) \frac{1}{R}}$, which with $\alpha = \frac{1}{\beta^2 + 1}$ gives the $F_\beta$ above.
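A minimal sketch of exact-match NER scoring with these metrics; the span representation and example values are illustrative:

```python
def ner_prf(gold_spans, pred_spans, beta=1.0):
    """Exact-match precision, recall, and F-measure over sets of entity spans.
    Each span is a (start, end, type) tuple."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)                                  # correctly returned names
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return precision, recall, 0.0
    f = (beta**2 + 1) * precision * recall / (beta**2 * precision + recall)
    return precision, recall, f

gold = [(0, 2, "ORG"), (12, 14, "PER")]
pred = [(0, 2, "ORG"), (5, 6, "LOC")]
print(ner_prf(gold, pred))   # (0.5, 0.5, 0.5)
```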

Outline Named Entities and the basic idea IOB Tagging A new classifier: Logistic Regression Linear regression Logistic regression Multinomial logistic regression = MaxEnt Why classifiers are not as good as sequence models A new sequence model: MEMM = Maximum Entropy Markov Model