KEY CHALLENGES: Overview of POS Tagging
Heng Ji (jih@rpi.edu), January 22, 2018

Key NLP Components
- Baseline Search: math basics, information retrieval
- Shallow Document Understanding: lexical analysis, part-of-speech tagging, parsing
- Deep Document Understanding: syntactic parsing, semantic role labeling, dependency parsing, name tagging, coreference resolution, relation extraction, temporal information extraction, event extraction, knowledge base construction, population, and utilization

Why NLP Is Difficult: Ambiguity
Favorite headlines:
- Teacher Strikes Idle Kids
- Stolen Painting Found by Tree
- Kids Make Nutritious Snacks
- Local HS Dropouts Cut in Half
- Red Tape Holds Up New Bridges
- Man Struck by Lightning Faces Battery Charge
- Hospitals Are Sued by 7 Foot Doctors

How can a machine understand these differences? "Get the cat with the gloves." (Use the gloves to get the cat? Or get the cat that has the gloves?)

Ambiguity
- Computational linguists are obsessed with ambiguity
- Ambiguity is a fundamental problem of computational linguistics
- Resolving ambiguity is a crucial goal

Remaining Challenges: Blame Others First
The fundamental language problems are ambiguity and variety.
- Coreference, coreference, coreference... After a successful karting career in Europe, Perera became part of the Toyota F1 Young Drivers Development Program and was a Formula One test driver for the Japanese company in 2006.
- Paraphrase, paraphrase, paraphrase... ("employee/member"): Sutil, a trained pianist, tested for Midland in 2006 and raced for Spyker in 2007, where he scored one point in the Japanese Grand Prix. Jennifer Dunn was the face of the Washington state Republican Party for more than two decades.
- Inference, inference, inference... The list says that the state is owed $2,665,305 in personal income taxes by singer Dionne Warwick of South Orange, N.J., with the tax lien dating back to 1997. Does she live in NJ?

Remaining Challenges: Deep Semantic Knowledge
"It was a pool report typo. Here is the exact Rhodes quote: 'This is not gonna be a couple of weeks. It will be a period of days.' At a WH briefing here in Santiago, NSA spox Rhodes came with a litany of pushback on the idea that the WH didn't consult with Congress. Rhodes singled out a Senate resolution that passed on March 1st which denounced Khaddafy's atrocities. WH says the UN resolution incorporates it." (Ben Rhodes, speech writer)
Lesson: go beyond the sentence level.

Remaining Challenges: Commonsense Knowledge
In his first televised address since the attack ended on Thursday, Kenyatta condemned the "barbaric slaughter" and asked for help from the Muslim community in rooting out radical elements.

Morphs in Social Media

Morph | Target | Motivation
"Conquer West King" (平西王) | Bo Xilai (薄熙来) | Avoid censorship
"Baby" (宝宝) | Wen Jiabao (温家宝) | Avoid censorship
"Blind Man" (瞎子) | Chen Guangcheng (陈光诚) | Sensitive
"First Emperor" (元祖) | Mao Zedong (毛泽东) | Vivid
"Kimchi Country" (泡菜国) | Korea (韩国) | Pronunciation
"Rice Country" (米国) | United States (美国) | Pronunciation
"Kim Third Fat" (金三胖) | Kim Jong-un (金正恩) | Negative
"Miracle Brother" (奇迹哥) | Wang Yongping (王勇平) | Irony

Speaker notes: There is even more uncertainty when we extract information from data under active censorship. In Chinese tweets we frequently see morphed entities such as "Conquer West King" or "Best Actor" that really refer to politicians, because people have to invent creative ways to communicate sensitive ideas. "Conquer West King" works because that historical king governed, hundreds of years ago, the same region as the politician Bo Xilai. Avoiding censorship is not the only motivation for creating morphs: people also create them for fun, for sarcasm/verbal irony, to express positive or negative sentiment, or to make descriptions of entities and events more vivid. A morph can be considered a special case of an alias used to hide the true entity in a malicious environment; morphs are usually generated by harvesting the collective wisdom of the crowd to achieve certain communication goals. The chat-message setting is extremely challenging due to the lack of background context, and it is probably difficult to obtain a substantial amount of similar real data, but other data sets under active censorship may be used for a pilot study. We propose the following new implicit RA task.

Outline
- POS tagging and HMMs
- Formal grammars: context-free grammar, grammars for English, treebanks
- Parsing and the CKY algorithm
- To be simple or to be useful?

What Is a Part of Speech (POS)?
Generally speaking, word classes (= POS): verb, noun, adjective, adverb, article, ...
We can also include inflection:
- Verbs: tense, number, ...
- Nouns: number, proper/common, ...
- Adjectives: comparative, superlative, ...
- ...

Parts of Speech
- 8 (ish) traditional parts of speech: noun, verb, adjective, preposition, adverb, article, interjection, pronoun, conjunction, etc.
- Also called: parts of speech, lexical categories, word classes, morphological classes, lexical tags...
- There is lots of debate within linguistics about the number, nature, and universality of these. We'll completely ignore this debate.

7 Traditional POS Categories
N    noun         chair, bandwidth, pacing
V    verb         study, debate, munch
ADJ  adjective    purple, tall, ridiculous
ADV  adverb       unfortunately, slowly
P    preposition  of, by, to
PRO  pronoun      I, me, mine
DET  determiner   the, a, that, those

POS Tagging
The process of assigning a part-of-speech or lexical class marker to each word in a collection:

WORD   TAG
the    DET
koala  N
put    V
the    DET
keys   N
on     P
table  N
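As a quick illustration (mine, not from the slides), this is what off-the-shelf POS tagging looks like in code. The sketch below uses NLTK and assumes its tokenizer and tagger models have been downloaded; the exact tags returned depend on the model.

```python
# A minimal sketch of off-the-shelf POS tagging with NLTK.
# Assumes: pip install nltk, plus the one-time model downloads below.
import nltk

nltk.download("punkt")                       # tokenizer model
nltk.download("averaged_perceptron_tagger")  # tagger model

tokens = nltk.word_tokenize("The koala put the keys on the table.")
print(nltk.pos_tag(tokens))
# Expected output (Penn Treebank tags), roughly:
# [('The', 'DT'), ('koala', 'NN'), ('put', 'VBD'), ('the', 'DT'),
#  ('keys', 'NNS'), ('on', 'IN'), ('the', 'DT'), ('table', 'NN'), ('.', '.')]
```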

Penn Treebank POS Tag Set
- Penn Treebank: hand-annotated corpus of the Wall Street Journal, 1M words
- 45 tags
- Some particularities:
  - to/TO is not disambiguated (preposition vs. infinitive marker)
  - Auxiliaries and verbs are not distinguished

Penn Treebank Tagset

Why Is POS Tagging Useful?
- Speech synthesis: how to pronounce "lead"? Compare INsult/inSULT, OBject/obJECT, OVERflow/overFLOW, DIScount/disCOUNT, CONtent/conTENT
- Stemming for information retrieval: a search for "aardvarks" can also match "aardvark"
- Parsing, speech recognition, etc.: possessive pronouns (my, your, her) are likely followed by nouns, personal pronouns (I, you, he) by verbs; we need to know whether a word is an N or a V before we can parse
- Information extraction: finding names, relations, etc.
- Machine translation

Open and Closed Classes
Closed class: a small, fixed membership
- Prepositions: of, in, by, ...
- Auxiliaries: may, can, will, had, been, ...
- Pronouns: I, you, she, mine, his, them, ...
- Usually function words (short, common words that play a role in grammar)
Open class: new members can be created all the time
- English has 4: nouns, verbs, adjectives, adverbs
- Many languages have these 4, but not all!

Open Class Words
Nouns
- Proper nouns (Boulder, Granby, Eli Manning); English capitalizes these
- Common nouns (the rest): count nouns and mass nouns
  - Count nouns have plurals and get counted: goat/goats, one goat, two goats
  - Mass nouns don't get counted: snow, salt, communism (*two snows)
Adverbs: tend to modify things
- Unfortunately, John walked home extremely slowly yesterday
- Directional/locative adverbs (here, home, downhill)
- Degree adverbs (extremely, very, somewhat)
- Manner adverbs (slowly, slinkily, delicately)
Verbs
- In English, have morphological affixes (eat/eats/eaten)

Closed Class Words
Examples:
- prepositions: on, under, over, ...
- particles: up, down, on, off, ...
- determiners: a, an, the, ...
- pronouns: she, who, I, ...
- conjunctions: and, but, or, ...
- auxiliary verbs: can, may, should, ...
- numerals: one, two, three, third, ...

Prepositions from CELEX

English Particles

Conjunctions

POS Tagging: Choosing a Tagset
- There are many parts of speech and many potential distinctions we could draw
- To do POS tagging, we need to choose a standard set of tags to work with
- We could pick a very coarse tagset: N, V, Adj, Adv
- The more commonly used set is finer grained: the Penn Treebank tagset, 45 tags (PRP$, WRB, WP$, VBG, ...)
- Even more fine-grained tagsets exist

Using the Penn Tagset
The/DT grand/JJ jury/NN commented/VBD on/IN a/DT number/NN of/IN other/JJ topics/NNS ./.
- Prepositions and subordinating conjunctions are marked IN ("although/IN I/PRP ...")
- Except the preposition/complementizer "to", which is just marked TO

POS Tagging
Words often have more than one POS. Consider "back":
- The back door = JJ
- On my back = NN
- Win the voters back = RB
- Promised to back the bill = VB
The POS tagging problem is to determine the POS tag for a particular instance of a word. (These examples from Dekang Lin.)

How Hard is POS Tagging? Measuring Ambiguity

Current Performance
- How many tags are correct? About 97% currently, but the baseline is already 90%.
- Baseline algorithm: tag every word with its most frequent tag; tag unknown words as nouns.
- How well do people do? Human annotators also agree about 97% of the time.
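To make the baseline concrete, here is a minimal sketch (mine, not the slides') of the most-frequent-tag baseline. The tiny hand-tagged corpus is a stand-in assumption; a real run would train on something like the tagged Brown corpus or the Penn Treebank.

```python
# A minimal sketch of the most-frequent-tag baseline described above.
from collections import Counter, defaultdict

tagged_corpus = [
    ("the", "DT"), ("back", "JJ"), ("door", "NN"),
    ("win", "VB"), ("the", "DT"), ("voters", "NNS"), ("back", "RB"),
    ("promised", "VBD"), ("to", "TO"), ("back", "VB"), ("the", "DT"), ("bill", "NN"),
]

# Count how often each word receives each tag.
tag_counts = defaultdict(Counter)
for word, tag in tagged_corpus:
    tag_counts[word][tag] += 1

def baseline_tag(word):
    """Most frequent tag for known words; NN for unknown words."""
    if word in tag_counts:
        return tag_counts[word].most_common(1)[0][0]
    return "NN"

print([(w, baseline_tag(w)) for w in ["the", "back", "zebra"]])
# [('the', 'DT'), ('back', 'JJ'), ('zebra', 'NN')]  ('back' ties broken arbitrarily)
```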

Quick Test: Agreement?
- the students went to class
- plays well with others
- fruit flies like a banana
Tag inventory: DT (the, this, that), NN (noun), VB (verb), P (preposition), ADV (adverb)

Quick Test
the students went to class
DT  NN      VB   P  NN

plays well with others
VB    ADV  P    NN
NN    NN   P    DT

fruit flies like a  banana
NN    NN    VB   DT NN
NN    VB    P    DT NN
NN    NN    P    DT NN
NN    VB    VB   DT NN

How to Do It? History

Timeline of tagger development (accuracy):
- Greene and Rubin, rule based: 70%
- HMM tagging (CLAWS): 93%-95%
- DeRose/Church, efficient HMM with sparse data: 95%+
- Transformation-Based Tagging (Eric Brill), rule based: 95%+
- Tree-based statistics (Helmut Schmid), rule based: 96%+
- Trigram tagger (Kempe): 96%+
- Neural network: 96%+
- Combined methods: 98%+

Corpus milestones (1960s-1990s):
- Brown Corpus created (EN-US), 1 million words; tagged later
- LOB Corpus created (EN-UK), 1 million words; tagged later
- Penn Treebank Corpus (WSJ, 4.5M words)
- British National Corpus (tagged by CLAWS)
- POS tagging became separated from other NLP tasks

Notes: The Brown Corpus (1967), by Henry Kucera and W. Nelson Francis, contains 1,000,000 words in 500 sample texts from about 15 topics; about half of the total vocabulary appears only once. POS tagging was added later, first using the Greene and Rubin tagger (70% success rate), and was considered as complete as possible only by the late seventies. It uses 80 POS tags, plus indicators for compound forms, contractions, foreign words, etc. Early rule-based taggers include Klein and Simmons (1963), Greene and Rubin (1971; 70% success rate), and Hindle (1989). Brill (1992) combined the most probable tag with 2 heuristics for unknown words, reaching a 7.9% error rate.

Two Methods for POS Tagging
1. Rule-based tagging (e.g., ENGTWOL)
2. Stochastic: probabilistic sequence models
   - HMM (Hidden Markov Model) tagging
   - MEMMs (Maximum Entropy Markov Models)

Rule-Based Tagging
1. Start with a dictionary
2. Assign all possible tags to words from the dictionary
3. Write rules by hand to selectively remove tags, leaving the correct tag for each word

Rule-Based Taggers
- Early POS taggers were all hand-coded
- Most of these (Harris, 1962; Greene and Rubin, 1971), and the best of the recent ones, ENGTWOL (Voutilainen, 1995), are based on a two-stage architecture:
  - Stage 1: look up each word in a lexicon to get a list of potential POSs
  - Stage 2: apply rules that certify or disallow tag sequences
- Rules were originally handwritten; more recently, machine learning methods can be used

Start With a Dictionary
- she: PRP
- promised: VBN, VBD
- to: TO
- back: VB, JJ, RB, NN
- the: DT
- bill: NN, VB
- etc., for the ~100,000 words of English with more than 1 tag

Assign Every Possible Tag

She  promised  to  back  the  bill
PRP  VBN       TO  VB    DT   NN
     VBD           JJ         VB
                   RB
                   NN

Write Rules to Eliminate Tags
Rule: eliminate VBN if VBD is an option when VBN|VBD follows "<start> PRP"

She  promised       to  back  the  bill
PRP  VBD            TO  VB    DT   NN
     (VBN removed)      JJ         VB
                        RB
                        NN
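A minimal sketch of the two stages in code (my illustration, not the slides' implementation): a lexicon lookup followed by the single hand-written elimination rule above. The lexicon and rule are the toy examples from these slides.

```python
# A minimal sketch of two-stage rule-based tagging: stage 1 assigns all
# dictionary tags, stage 2 applies hand-written elimination rules.
LEXICON = {
    "she": ["PRP"],
    "promised": ["VBN", "VBD"],
    "to": ["TO"],
    "back": ["VB", "JJ", "RB", "NN"],
    "the": ["DT"],
    "bill": ["NN", "VB"],
}

def stage1(words):
    """Assign every possible tag from the dictionary."""
    return [list(LEXICON[w.lower()]) for w in words]

def stage2(words, candidates):
    """Eliminate VBN if VBD is an option and VBN|VBD follows '<start> PRP'."""
    for i in range(1, len(words)):
        prev, cur = candidates[i - 1], candidates[i]
        if i == 1 and prev == ["PRP"] and "VBN" in cur and "VBD" in cur:
            cur.remove("VBN")
    return candidates

words = ["She", "promised", "to", "back", "the", "bill"]
print(stage2(words, stage1(words)))
# [['PRP'], ['VBD'], ['TO'], ['VB', 'JJ', 'RB', 'NN'], ['DT'], ['NN', 'VB']]
```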

POS Tagging via Machine Learning

Training data (hand-tagged text):
The/DT involvement/NN of/IN ion/NN channels/NNS in/IN B/NN and/CC T/NN lymphocyte/NN activation/NN is/VBZ supported/VBN by/IN many/JJ reports/NNS of/IN changes/NNS in/IN ion/NN fluxes/NNS and/CC membrane/NN ...

training data -> machine learning algorithm -> tagger

Unseen text: "We demonstrate that ..." is tagged We/PRP demonstrate/VBP that/IN ...

Goal of POS Tagging
We want the best set of tags for a sequence of words (a sentence).
- W: a sequence of words
- T: a sequence of tags

Our goal: find the T that maximizes P(T | W).

Example: P((NN NN P DET ADJ NN) | (heat oil in a large pot))

But: the Sparse Data Problem...
- Rich models often require vast amounts of data
- Naive approach: count up instances of the string "heat oil in a large pot" in the training corpus, and pick the most common tag assignment to that string
- This fails: there are too many possible combinations
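To put a number on "too many combinations" (my illustration): even for this single 6-word sentence, a 45-tag tagset admits 45^6 = 8,303,765,625, roughly 8.3 billion, candidate tag sequences, and the number of distinct word sequences we would need to have seen verbatim grows even faster. No corpus can cover them directly, so we need a model that decomposes the problem.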

POS Tagging as Sequence Classification
- We are given a sentence (an "observation" or "sequence of observations"): Secretariat is expected to race tomorrow
- What is the best sequence of tags that corresponds to this sequence of observations?
- Probabilistic view: consider all possible sequences of tags; out of this universe of sequences, choose the tag sequence that is most probable given the observation sequence of n words w1...wn

Getting to HMMs
We want, out of all sequences of n tags t1...tn, the single tag sequence such that P(t1...tn | w1...wn) is highest:

t̂1...t̂n = argmax over t1...tn of P(t1...tn | w1...wn)

- The hat ^ means "our estimate of the best one"
- argmax_x f(x) means "the x such that f(x) is maximized"

Getting to HMMs
- This equation is guaranteed to give us the best tag sequence, but how do we make it operational? How do we compute this value?
- Intuition of Bayesian classification: use Bayes' rule to transform the equation into a set of other probabilities that are easier to compute
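Spelled out (a standard derivation, consistent with the slides that follow): apply Bayes' rule, then drop the denominator, which does not depend on the tag sequence:

```latex
\hat{t}_1^n
  = \arg\max_{t_1 \ldots t_n} P(t_1 \ldots t_n \mid w_1 \ldots w_n)
  = \arg\max_{t_1 \ldots t_n}
      \frac{P(w_1 \ldots w_n \mid t_1 \ldots t_n)\, P(t_1 \ldots t_n)}
           {P(w_1 \ldots w_n)}
  = \arg\max_{t_1 \ldots t_n}
      \underbrace{P(w_1 \ldots w_n \mid t_1 \ldots t_n)}_{\text{likelihood}}\;
      \underbrace{P(t_1 \ldots t_n)}_{\text{prior}}
```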

Reminder: Apply Bayes' Theorem (1763)

P(T | W) = P(W | T) P(T) / P(W)

where P(T | W) is the posterior (our goal: maximize it!), P(W | T) is the likelihood, P(T) is the prior, and P(W) is the marginal likelihood.

Reverend Thomas Bayes, Presbyterian minister (1702-1761)

How to Count
P(W|T) and P(T) can be counted from a large hand-tagged corpus, then smoothed to get rid of the zeroes.

Count P(W|T)
Assume each word in the sequence depends only on its corresponding tag:

P(W | T) ≈ ∏ (i = 1 to n) P(w_i | t_i)

Count P(T)
Make a Markov assumption over the tag history and use N-grams over tags: P(T) is a product of the probabilities of the N-grams that make it up. With bigrams:

P(T) ≈ P(t_1) ∏ (i = 2 to n) P(t_i | t_{i-1})
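As a small illustration (mine, with a toy corpus standing in for real treebank data), the bigram transition probabilities can be estimated by maximum likelihood from tag-pair counts:

```python
# A minimal sketch of estimating bigram tag-transition probabilities
# P(t_i | t_{i-1}) by maximum likelihood from a tiny tagged corpus.
from collections import Counter

tagged_sentences = [
    [("she", "PRP"), ("promised", "VBD"), ("to", "TO"), ("back", "VB"),
     ("the", "DT"), ("bill", "NN")],
    [("heat", "VB"), ("oil", "NN"), ("in", "P"), ("a", "DT"),
     ("large", "ADJ"), ("pot", "NN")],
]

bigrams, unigrams = Counter(), Counter()
for sent in tagged_sentences:
    tags = ["<start>"] + [t for _, t in sent]
    for prev, cur in zip(tags, tags[1:]):
        bigrams[(prev, cur)] += 1
        unigrams[prev] += 1

def p_transition(prev, cur):
    """MLE estimate of P(cur | prev); would need smoothing in practice."""
    return bigrams[(prev, cur)] / unigrams[prev] if unigrams[prev] else 0.0

print(p_transition("DT", "NN"))   # 0.5: DT is followed once by NN, once by ADJ
```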

Part-of-Speech Tagging with Hidden Markov Models

P(T | W) ∝ ∏ (i = 1 to n) P(w_i | t_i) P(t_i | t_{i-1})

The tags are the hidden states and the words are the observations: P(w_i | t_i) is the output (emission) probability and P(t_i | t_{i-1}) is the transition probability.

Analyzing "Fish sleep."

A Simple POS HMM
[State diagram: states start, noun, verb, end; the arcs carry transition probabilities, of which 0.8, 0.2, 0.7, and 0.1 are legible in the original figure]

Word Emission Probabilities: P(word | state)
A two-word language: "fish" and "sleep". Suppose in our training corpus:
- "fish" appears 8 times as a noun and 5 times as a verb
- "sleep" appears twice as a noun and 5 times as a verb
Emission probabilities:
- Noun: P(fish | noun) = 0.8, P(sleep | noun) = 0.2
- Verb: P(fish | verb) = 0.5, P(sleep | verb) = 0.5
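These numbers are just relative frequencies of the counts above; a quick sketch (mine) of the computation:

```python
# Emission probabilities as relative frequencies of the slide's counts.
counts = {
    "noun": {"fish": 8, "sleep": 2},   # 10 noun tokens total
    "verb": {"fish": 5, "sleep": 5},   # 10 verb tokens total
}

emission = {
    state: {w: c / sum(word_counts.values()) for w, c in word_counts.items()}
    for state, word_counts in counts.items()
}
print(emission)
# {'noun': {'fish': 0.8, 'sleep': 0.2}, 'verb': {'fish': 0.5, 'sleep': 0.5}}
```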

Viterbi Probabilities

[The next slides step through the Viterbi trellis on "fish sleep" for the HMM above; only the step captions are recoverable from the figures:]
- Initialization
- Token 1: fish
- Token 2: sleep (if "fish" is a verb)
- Token 2: sleep (if "fish" is a noun)
- Token 2: sleep; take maximum, set back pointers
- Token 3: end
- Token 3: end; take maximum, set back pointers
- Decode: fish = noun, sleep = verb
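The whole decode fits in a short Viterbi implementation. The sketch below is mine: the emission probabilities are the ones given above, while the transition values marked "assumed" were not legible in the figure and are plausible guesses chosen to be consistent with the recoverable 0.8/0.2/0.7/0.1 arcs and with the final decode (fish = noun, sleep = verb).

```python
# A minimal Viterbi decoder for the fish/sleep HMM.
STATES = ["noun", "verb"]

start = {"noun": 0.8, "verb": 0.2}
trans = {
    "noun": {"noun": 0.1, "verb": 0.8, "end": 0.1},  # noun->noun, noun->verb assumed
    "verb": {"noun": 0.2, "verb": 0.1, "end": 0.7},  # verb->noun, verb->verb assumed
}
emit = {
    "noun": {"fish": 0.8, "sleep": 0.2},
    "verb": {"fish": 0.5, "sleep": 0.5},
}

def viterbi(words):
    # v[s] = probability of the best path ending in state s; back stores pointers
    v = {s: start[s] * emit[s][words[0]] for s in STATES}
    back = []
    for w in words[1:]:
        prev_v, v, ptr = v, {}, {}
        for s in STATES:
            best_prev = max(STATES, key=lambda p: prev_v[p] * trans[p][s])
            v[s] = prev_v[best_prev] * trans[best_prev][s] * emit[s][w]
            ptr[s] = best_prev
        back.append(ptr)
    # Transition into the end state, then follow back pointers
    last = max(STATES, key=lambda s: v[s] * trans[s]["end"])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(viterbi(["fish", "sleep"]))   # ['noun', 'verb']
```

With these numbers, "fish" scores 0.64 as a noun vs. 0.1 as a verb, and the best full path ends verb -> end with probability 0.256 * 0.7 = 0.1792, reproducing the slide's decode.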

Markov Chain for a Simple Name Tagger
[State diagram: states START, PER, LOC, X, END; arcs carry transition probabilities (values 0.1-0.6 in the figure) and states carry emission probabilities. Legible emissions include PER: Bob 0.5, Dylan 0.4, Albany 0.1; LOC: Albany 0.8, Dylan 0.2; X: visited 0.9; '.': 1.0]
Speaker notes (from yesterday's talk): generative models capture the center of a distribution, discriminative models the margin between distributions; HMMs match sequential data better.

Exercise
Tag the names in the following sentence: "Bob Dylan visited Albany."
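One way to work the exercise (my sketch, not the slides' solution): score every tag sequence with P(T) P(W|T), exactly the quantity the HMM defines. Since only some probabilities are legible in the diagram above, the start and transition numbers below are hypothetical placeholders in the same spirit; with them, the best sequence comes out Bob/PER Dylan/PER visited/X Albany/LOC.

```python
# Brute-force decoding for the name-tagger exercise: enumerate all tag
# sequences and score P(T) * P(W|T). Start/transition numbers are
# hypothetical stand-ins for the partially legible figure; only the
# emissions Bob:0.5, Dylan:0.4/0.2, Albany:0.1/0.8, visited:0.9 are from it.
from itertools import product

STATES = ["PER", "LOC", "X"]
start = {"PER": 0.5, "LOC": 0.2, "X": 0.3}                       # hypothetical
trans = {                                                         # hypothetical
    "PER": {"PER": 0.3, "LOC": 0.2, "X": 0.5},
    "LOC": {"PER": 0.2, "LOC": 0.3, "X": 0.5},
    "X":   {"PER": 0.3, "LOC": 0.5, "X": 0.2},
}
emit = {
    "PER": {"Bob": 0.5, "Dylan": 0.4, "Albany": 0.1, "visited": 0.0},
    "LOC": {"Bob": 0.0, "Dylan": 0.2, "Albany": 0.8, "visited": 0.0},
    "X":   {"Bob": 0.0, "Dylan": 0.0, "Albany": 0.1, "visited": 0.9},
}

def score(tags, words):
    """P(T) * P(W|T) for one candidate tag sequence."""
    p = start[tags[0]] * emit[tags[0]][words[0]]
    for prev, cur, w in zip(tags, tags[1:], words[1:]):
        p *= trans[prev][cur] * emit[cur][w]
    return p

words = ["Bob", "Dylan", "visited", "Albany"]
best = max(product(STATES, repeat=len(words)), key=lambda t: score(t, words))
print(best)   # ('PER', 'PER', 'X', 'LOC') with these numbers
```

Brute force is fine for 3^4 = 81 sequences; the Viterbi sketch above computes the same argmax in time linear in sentence length.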

POS Taggers
- Brill's tagger: http://www.cs.jhu.edu/~brill/
- TnT tagger: http://www.coli.uni-saarland.de/~thorsten/tnt/
- Stanford tagger: http://nlp.stanford.edu/software/tagger.shtml
- SVMTool: http://www.lsi.upc.es/~nlp/SVMTool/
- GENIA tagger: http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/tagger/
- More complete list at: http://www-nlp.stanford.edu/links/statnlp.html#Taggers

Assignment: http://nlp.cs.rpi.edu/course/spring18/assignment1.pdf