1
KEY CHALLENGES Overview POS Tagging
Heng Ji January 22, 2018
2
Key NLP Components
- Baseline Search: math basics, Information Retrieval
- Shallow Document Understanding: Lexical Analysis, Part-of-Speech Tagging, Parsing
- Deep Document Understanding: Syntactic Parsing, Semantic Role Labeling, Dependency Parsing, Name Tagging, Coreference Resolution, Relation Extraction, Temporal Information Extraction, Event Extraction, Knowledge Base Construction, Population and Utilization
3
Why NLP is Difficult: Ambiguity
Favorite headlines:
- Teacher Strikes Idle Kids
- Stolen Painting Found by Tree
- Kids Make Nutritious Snacks
- Local HS Dropouts Cut in Half
- Red Tape Holds Up New Bridges
- Man Struck by Lightning Faces Battery Charge
- Hospitals Are Sued by 7 Foot Doctors
4
How can a machine understand these differences?
Get the cat with the gloves.
5
Ambiguity Computational linguists are obsessed with ambiguity
Ambiguity is a fundamental problem of computational linguistics. Resolving ambiguity is a crucial goal.
6
Remaining Challenges: Blame Others First
Fundamental language problems: ambiguity and variety.
Coreference, coreference, coreference…
- After a successful karting career in Europe, Perera became part of the Toyota F1 Young Drivers Development Program and was a Formula One test driver for the Japanese company in 2006.
Paraphrase, paraphrase, paraphrase… ("employee/member"):
- Sutil, a trained pianist, tested for Midland in 2006 and raced for Spyker in 2007, where he scored one point in the Japanese Grand Prix.
- Jennifer Dunn was the face of the Washington state Republican Party for more than two decades.
Inference, inference, inference…
- The list says that the state is owed $2,665,305 in personal income taxes by singer Dionne Warwick of South Orange, N.J., with the tax lien dating back to … (does she live in NJ?)
7
Remaining Challenges: Deep Semantic Knowledge
It was a pool report typo. Here is the exact Rhodes quote: "this is not gonna be a couple of weeks. It will be a period of days."
At a WH briefing here in Santiago, NSA spox Rhodes came with a litany of pushback on the idea WH didn't consult with Congress.
Rhodes singled out a Senate resolution that passed on March 1st which denounced Khaddafy's atrocities. WH says UN rez incorporates it.
Ben Rhodes (speech writer)
Takeaway: go beyond the sentence level.
8
Remaining Challenges: Commonsense Knowledge
In his first televised address since the attack ended on Thursday, Kenyatta condemned the "barbaric slaughter" and asked for help from the Muslim community in rooting out radical elements.
9
Morphs in Social Media

Morph | Target | Motivation
"Conquer West King" (平西王) | Bo Xilai (薄熙来) |
"Baby" (宝宝) | Wen Jiabao (温家宝) |
"Blind Man" (瞎子) | Chen Guangcheng (陈光诚) | Sensitive
"First Emperor" (元祖) | Mao Zedong (毛泽东) | Vivid
"Kimchi Country" (泡菜国) | Korea (韩国) |
"Rice Country" (米国) | United States (美国) | Pronunciation
"Kim Third Fat" (金三胖) | Kim Jong-un (金正恩) | Negative
"Miracle Brother" (奇迹哥) | Wang Yongping (王勇平) | Irony

Notes: There is even more uncertainty when we extract information from data produced under active censorship. In Chinese tweets we frequently see morphed entities such as "Conquer West King" or "Best Actor" that actually refer to politicians, because people have to invent creative ways to communicate sensitive ideas. "Conquer West King" works for Bo Xilai because that historical king governed the same region hundreds of years ago. We call this phenomenon information morph: a special case of alias used to hide the true entity in a hostile environment, usually generated by harvesting the collective wisdom of the crowd to achieve certain communication goals. Avoiding censorship is not the only motivation; morphs are also created for fun, to express sarcasm/verbal irony or positive/negative sentiment, or to make descriptions of entities and events more vivid. Chat-message examples like these are extremely challenging because of the lack of background context, and it is probably difficult to obtain a substantial amount of similar real data, but other data sets under active censorship (e.g., Chinese microblogs and discussion forums) can be used for a pilot study. We propose the following new implicit RA task.
10
Outline
- POS Tagging and HMM
- Formal Grammars
  - Context-free grammars
  - Grammars for English
  - Treebanks
- Parsing and the CKY Algorithm
- To be simple or to be useful?
11
What is a Part of Speech (POS)?
Generally speaking, word classes (= POS): verb, noun, adjective, adverb, article, …
We can also include inflection:
- Verbs: tense, number, …
- Nouns: number, proper/common, …
- Adjectives: comparative, superlative, …
- …
12
Parts of Speech 8 (ish) traditional parts of speech
Noun, verb, adjective, preposition, adverb, article, interjection, pronoun, conjunction, etc.
Also called: parts of speech, lexical categories, word classes, morphological classes, lexical tags, …
There is lots of debate within linguistics about the number, nature, and universality of these categories. We'll completely ignore this debate.
13
7 Traditional POS Categories
N    noun         chair, bandwidth, pacing
V    verb         study, debate, munch
ADJ  adjective    purple, tall, ridiculous
ADV  adverb       unfortunately, slowly
P    preposition  of, by, to
PRO  pronoun      I, me, mine
DET  determiner   the, a, that, those
14
POS Tagging
The process of assigning a part-of-speech or lexical class marker to each word in a collection:

WORD   TAG
the    DET
koala  N
put    V
the    DET
keys   N
on     P
table  N
15
Penn TreeBank POS Tag Set
Penn Treebank: a hand-annotated corpus of Wall Street Journal text, 1M words, 46 tags
Some particularities:
- to/TO is not disambiguated
- Auxiliaries and verbs are not distinguished
16
Penn Treebank Tagset
17
Why is POS tagging useful?
- Speech synthesis: how do we pronounce "lead"? Stress depends on POS: INsult/inSULT, OBject/obJECT, OVERflow/overFLOW, DIScount/disCOUNT, CONtent/conTENT
- Stemming for information retrieval: a search for "aardvarks" can also return "aardvark"
- Parsing, speech recognition, etc.: possessive pronouns (my, your, her) are followed by nouns, while personal pronouns (I, you, he) are likely to be followed by verbs; we need to know whether a word is an N or a V before we can parse
- Information extraction: finding names, relations, etc.
- Machine translation
18
Open and Closed Classes
Closed class: a small, fixed membership
- Prepositions: of, in, by, …
- Auxiliaries: may, can, will, had, been, …
- Pronouns: I, you, she, mine, his, them, …
- Usually function words (short, common words that play a role in grammar)
Open class: new members can be created all the time
- English has 4: nouns, verbs, adjectives, adverbs
- Many languages have these 4, but not all!
19
Open Class Words
Nouns
- Proper nouns (Boulder, Granby, Eli Manning); English capitalizes these
- Common nouns (the rest)
- Count nouns and mass nouns
  - Count nouns have plurals and get counted: goat/goats, one goat, two goats
  - Mass nouns don't get counted (snow, salt, communism) (*two snows)
Adverbs: tend to modify things
- Unfortunately, John walked home extremely slowly yesterday
- Directional/locative adverbs (here, home, downhill)
- Degree adverbs (extremely, very, somewhat)
- Manner adverbs (slowly, slinkily, delicately)
Verbs
- In English, have morphological affixes (eat/eats/eaten)
20
Closed Class Words
Examples:
- prepositions: on, under, over, …
- particles: up, down, on, off, …
- determiners: a, an, the, …
- pronouns: she, who, I, …
- conjunctions: and, but, or, …
- auxiliary verbs: can, may, should, …
- numerals: one, two, three, third, …
21
Prepositions from CELEX
22
English Particles
23
Conjunctions
24
POS Tagging Choosing a Tagset
There are many parts of speech and many potential distinctions we could draw. To do POS tagging, we need to choose a standard set of tags to work with.
- We could pick a very coarse tagset: N, V, Adj, Adv
- The more commonly used set is finer grained: the "Penn Treebank tagset", 45 tags (e.g., PRP$, WRB, WP$, VBG)
- Even more fine-grained tagsets exist
25
Using the Penn Tagset
The/DT grand/JJ jury/NN commented/VBD on/IN a/DT number/NN of/IN other/JJ topics/NNS ./.
- Prepositions and subordinating conjunctions are marked IN ("although/IN I/PRP …")
- Exception: the preposition/complementizer "to" is just marked TO
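For readers who want to see Penn Treebank tags on real text, NLTK's pretrained tagger produces them; this is a quick illustration, assuming NLTK and its tagger models are installed, and its output is not guaranteed to match the hand annotation above.

```python
import nltk

# One-time downloads; on newer NLTK releases the resources may be named
# "punkt_tab" and "averaged_perceptron_tagger_eng" instead.
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The grand jury commented on a number of other topics.")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('grand', 'JJ'), ('jury', 'NN'), ('commented', 'VBD'), ...]
```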
26
POS Tagging Words often have more than one POS: back
- The back door = JJ
- On my back = NN
- Win the voters back = RB
- Promised to back the bill = VB
The POS tagging problem is to determine the POS tag for a particular instance of a word. (These examples are from Dekang Lin.)
27
How Hard is POS Tagging? Measuring Ambiguity
28
Current Performance How many tags are correct? How well do people do?
About 97% of tags are currently assigned correctly, but the baseline already reaches about 90%.
Baseline algorithm (sketched below):
- Tag every word with its most frequent tag
- Tag unknown words as nouns
How well do people do?
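A minimal sketch of that baseline; the toy training list and names here are assumptions for illustration, and a real run would count tags over a hand-tagged corpus such as the Penn Treebank.

```python
from collections import Counter, defaultdict

# Toy "hand-tagged corpus": (word, tag) pairs, using the tags from the koala example
train = [("the", "DET"), ("koala", "N"), ("put", "V"),
         ("the", "DET"), ("keys", "N"), ("on", "P"), ("table", "N")]

# Count how often each word receives each tag
counts = defaultdict(Counter)
for word, tag in train:
    counts[word.lower()][tag] += 1

def baseline_tag(words, unknown_tag="N"):
    """Most-frequent-tag baseline: known words get their commonest training tag,
    unknown words are tagged as nouns."""
    return [(w, counts[w.lower()].most_common(1)[0][0] if w.lower() in counts else unknown_tag)
            for w in words]

print(baseline_tag(["the", "koala", "put", "the", "keys", "on", "the", "aardvark"]))
```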
29
Quick Test: Agreement?
- the students went to class
- plays well with others
- fruit flies like a banana
Tag set: DT (determiner: the, this, that), NN (noun), VB (verb), P (preposition), ADV (adverb)
30
Quick Test (answers)
- the/DT students/NN went/VB to/P class/NN
- plays well with others: VB ADV P NN, or NN NN P DT
- fruit flies like a banana: NN NN VB DT NN, or NN VB P DT NN, or NN NN P DT NN, or NN VB VB DT NN
31
How to do it? A brief history (1960-2000)

Taggers and reported accuracies:
- Greene and Rubin: rule-based, ~70%
- HMM tagging (CLAWS): 93%-95%
- DeRose/Church: efficient HMM handling sparse data, 95%+
- Transformation-based tagging (Eric Brill): rule-based, 95%+
- Tree-based statistics (Helmut Schmid): rule-based, 96%+
- Trigram tagger (Kempe): 96%+
- Neural network taggers: 96%+
- Combined methods: 98%+

Corpora:
- Brown Corpus created (EN-US), 1 million words; later tagged
- LOB Corpus created (EN-UK), 1 million words; tagged
- British National Corpus (tagged by CLAWS)
- Penn Treebank Corpus (WSJ, 4.5M words)

Brown Corpus (1967): Henry Kucera & W. Nelson Francis; 1,000,000 words; 500 sample texts from about 15 topics; about half of the total vocabulary appears only once. POS tagging was added later, first with the Greene and Rubin tagger (70% success rate), and was considered as complete as possible only by the late seventies. It uses 80 POS tags, plus indicators for compound forms, contractions, foreign words, etc.

Other milestones: Klein and Simmons (1963), rule-based; Greene and Rubin (1971), rule-based, 70% success rate; Hindle (1989), rule-based; Brill (1992), most probable tag plus 2 heuristics for unknown words, 7.9% error. By this point POS tagging had been separated from other NLP tasks.
32
Two Methods for POS Tagging
- Rule-based tagging (ENGTWOL)
- Stochastic: probabilistic sequence models
  - HMM (Hidden Markov Model) tagging
  - MEMMs (Maximum Entropy Markov Models)
33
Rule-Based Tagging
- Start with a dictionary
- Assign all possible tags to words from the dictionary
- Write rules by hand to selectively remove tags, leaving the correct tag for each word
34
Rule-based taggers
- Early POS taggers were all hand-coded
- Most of these (Harris, 1962; Greene and Rubin, 1971), and the best of the recent ones, ENGTWOL (Voutilainen, 1995), are based on a two-stage architecture:
  - Stage 1: look the word up in a lexicon to get a list of potential POS tags
  - Stage 2: apply rules that certify or disallow tag sequences
- Rules were originally handwritten; more recently, machine learning methods can be used
35
Start With a Dictionary
she: PRP
promised: VBN, VBD
to: TO
back: VB, JJ, RB, NN
the: DT
bill: NN, VB
Etc. … for the ~100,000 words of English with more than one tag
36
Assign Every Possible Tag
She/PRP promised/{VBN, VBD} to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}
37
Write Rules to Eliminate Tags
Rule: eliminate VBN if VBD is an option when VBN|VBD follows "<start> PRP"
She/PRP promised/VBD to/TO back/{VB, JJ, RB, NN} the/DT bill/{NN, VB}   (VBN eliminated; see the sketch below)
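A minimal sketch of the two-stage rule-based idea, using the toy dictionary from the "Start With a Dictionary" slide and only the single elimination rule above; the function name and code structure are illustrative assumptions, not ENGTWOL.

```python
# Stage 1: dictionary of all possible tags per word (from the earlier slide)
lexicon = {
    "she": ["PRP"],
    "promised": ["VBN", "VBD"],
    "to": ["TO"],
    "back": ["VB", "JJ", "RB", "NN"],
    "the": ["DT"],
    "bill": ["NN", "VB"],
}

def eliminate_vbn_after_start_prp(words, candidates):
    """Eliminate VBN if VBD is an option when VBN|VBD follows '<start> PRP'."""
    for i in range(1, len(words)):
        follows_start_prp = (i == 1 and candidates[0] == ["PRP"])
        if follows_start_prp and "VBN" in candidates[i] and "VBD" in candidates[i]:
            candidates[i] = [t for t in candidates[i] if t != "VBN"]
    return candidates

words = ["She", "promised", "to", "back", "the", "bill"]
candidates = [list(lexicon[w.lower()]) for w in words]           # stage 1: look up all tags
candidates = eliminate_vbn_after_start_prp(words, candidates)    # stage 2: apply rules
print(list(zip(words, candidates)))
# 'promised' is now ['VBD']; the remaining ambiguity (e.g. 'back') needs further rules
```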
38
POS tagging as supervised learning:
Training (hand-tagged text):
  The/DT involvement/NN of/IN ion/NN channels/NNS in/IN B/NN and/CC T/NN lymphocyte/NN activation/NN is/VBZ supported/VBN by/IN many/JJ reports/NNS of/IN changes/NNS in/IN ion/NN fluxes/NNS and/CC membrane/NN …
  → Machine Learning Algorithm
Unseen text:
  "We demonstrate that …" → We/PRP demonstrate/VBP that/IN …
39
Goal of POS Tagging
We want the best sequence of tags for a sequence of words (a sentence):
- W: a sequence of words
- T: a sequence of tags
Our goal: choose the T that maximizes P(T | W).
Example: P((NN NN P DET ADJ NN) | (heat oil in a large pot))
40
But, the Sparse Data Problem …
Rich models often require vast amounts of data. We could count instances of the string "heat oil in a large pot" in the training corpus and pick the most common tag assignment for the whole string, but there are far too many possible combinations for that to work.
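To make "too many possible combinations" concrete, here is a rough count assuming the 45-tag Penn set (the tagset size is an assumption for illustration; the slide does not fix one).

```python
# Candidate tag sequences for the 6-word string "heat oil in a large pot"
# under a 45-tag tagset
print(45 ** 6)  # 8303765625, about 8.3 billion, so counting whole strings cannot work
```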
41
POS Tagging as Sequence Classification
We are given a sentence (an "observation" or "sequence of observations"):
  Secretariat is expected to race tomorrow
What is the best sequence of tags that corresponds to this sequence of observations?
Probabilistic view: consider all possible sequences of tags; out of this universe of sequences, choose the tag sequence that is most probable given the observation sequence of n words w1…wn.
42
Getting to HMMs
We want, out of all sequences of n tags t1…tn, the single tag sequence such that P(t1…tn | w1…wn) is highest:
  best t1…tn = argmax over t1…tn of P(t1…tn | w1…wn)
The hat ^ means "our estimate of the best one"; argmax_x f(x) means "the x such that f(x) is maximized".
43
Getting to HMMs
This equation is guaranteed to give us the best tag sequence, but how do we make it operational? How do we compute this value?
Intuition of Bayesian classification: use Bayes' rule to transform the equation into a set of other probabilities that are easier to compute.
44
Reminder: Apply Bayes’ Theorem (1763)
P(T | W) = P(W | T) · P(T) / P(W)
posterior = likelihood × prior / marginal likelihood
Our goal: maximize the posterior.
(Reverend Thomas Bayes, Presbyterian minister)
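Spelling out the step the next slides rely on (a standard derivation, reconstructed here rather than copied from a slide): P(W) is the same for every candidate tag sequence, so it can be dropped from the argmax.

  best T = argmax over T of P(T | W)
         = argmax over T of P(W | T) · P(T) / P(W)
         = argmax over T of P(W | T) · P(T)

This is why the following slides only need to count P(W | T) and P(T).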
45
How to Count
P(W | T) and P(T) can be counted from a large hand-tagged corpus; the counts are then smoothed to get rid of the zeroes.
46
Count P(W|T) and P(T)
Assume each word in the sequence depends only on its corresponding tag, so P(W | T) ≈ P(w1 | t1) · P(w2 | t2) · … · P(wn | tn).
47
Count P(T)
Make a Markov assumption, conditioning each tag on a short history of previous tags, and use N-grams over tags. P(T) is then a product of the probabilities of the N-grams that make it up; with bigrams, P(t1…tn) ≈ P(t1) · P(t2 | t1) · … · P(tn | t(n-1)). (A counting sketch follows.)
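A minimal sketch (not the lecture's code) of counting these two distributions from a hand-tagged corpus under the bigram assumption; the toy corpus below is an assumption, and real maximum-likelihood counts from a treebank would still need smoothing.

```python
from collections import Counter, defaultdict

# Toy hand-tagged corpus: a list of sentences of (word, tag) pairs
corpus = [[("fish", "NN"), ("sleep", "VB")],
          [("fish", "VB")],
          [("fish", "NN"), ("fish", "NN")]]

emit = defaultdict(Counter)    # counts for P(word | tag)
trans = defaultdict(Counter)   # counts for P(tag_i | tag_{i-1}), with <s> as the start symbol

for sent in corpus:
    prev = "<s>"
    for word, tag in sent:
        emit[tag][word] += 1
        trans[prev][tag] += 1
        prev = tag
    trans[prev]["</s>"] += 1   # end-of-sentence transition

def p_emit(word, tag):
    return emit[tag][word] / sum(emit[tag].values())

def p_trans(tag, prev):
    return trans[prev][tag] / sum(trans[prev].values())

print(p_emit("fish", "NN"), p_trans("VB", "NN"))  # relative-frequency (MLE) estimates
```

With real corpora you would smooth these estimates (e.g. add-one), precisely because many word/tag and tag/tag pairs are never observed.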
48
Part-of-speech tagging with Hidden Markov Models
[Figure: an HMM for POS tagging; the tags are hidden states linked by transition probabilities, and each tag generates words with output (emission) probabilities]
49
Analyzing "Fish sleep."
50
A Simple POS HMM
[State diagram: start, noun, verb, end, with transition probabilities including 0.8, 0.2, 0.7, and 0.1]
51
Word Emission Probabilities P(word | state)
- A two-word language: "fish" and "sleep"
- Suppose in our training corpus "fish" appears 8 times as a noun and 5 times as a verb, and "sleep" appears twice as a noun and 5 times as a verb
- Emission probabilities (relative frequencies of those counts; see the check below):
  - Noun: P(fish | noun) = 0.8, P(sleep | noun) = 0.2
  - Verb: P(fish | verb) = 0.5, P(sleep | verb) = 0.5
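As a quick check, these numbers are just relative frequencies of the counts above (a maximum-likelihood estimate, which is an assumption about how the slide derived them).

```python
# Relative-frequency (MLE) estimates from the counts on this slide
print(8 / (8 + 2), 2 / (8 + 2))   # P(fish | noun) = 0.8, P(sleep | noun) = 0.2
print(5 / (5 + 5), 5 / (5 + 5))   # P(fish | verb) = 0.5, P(sleep | verb) = 0.5
```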
52
Viterbi Probabilities
53
54
Token 1: fish
55
56
Token 2: sleep (if "fish" is a verb)
57
58
Token 2: sleep (if "fish" is a noun)
59
60
Token 2: sleep (take the maximum, set back pointers)
61
62
Token 3: end
63
Token 3: end (take the maximum, set back pointers)
64
Decode: fish = noun, sleep = verb (a Viterbi sketch follows)
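A minimal Viterbi sketch for this two-word example. The emission probabilities come from the earlier slide; the full transition matrix is not recoverable from this text dump, so the transition numbers below are illustrative placeholders chosen so the decode reproduces fish = noun, sleep = verb.

```python
states = ["noun", "verb"]

# P(first tag | start) and P(end | last tag): assumed values
start_p = {"noun": 0.8, "verb": 0.2}
end_p = {"noun": 0.1, "verb": 0.7}

# P(tag_i | tag_{i-1}): assumed values
trans_p = {"noun": {"noun": 0.1, "verb": 0.8},
           "verb": {"noun": 0.4, "verb": 0.1}}

# P(word | tag): from the emission-probability slide
emit_p = {"noun": {"fish": 0.8, "sleep": 0.2},
          "verb": {"fish": 0.5, "sleep": 0.5}}

def viterbi(words):
    # V[i][s] = probability of the best tag sequence ending in state s at word i
    V = [{s: start_p[s] * emit_p[s][words[0]] for s in states}]
    back = [{}]
    for i in range(1, len(words)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max((V[i - 1][p] * trans_p[p][s] * emit_p[s][words[i]], p)
                             for p in states)
            V[i][s] = prob          # take the maximum ...
            back[i][s] = prev       # ... and set the back pointer
    prob, last = max((V[-1][s] * end_p[s], s) for s in states)  # transition into "end"
    tags = [last]
    for i in range(len(words) - 1, 0, -1):
        tags.append(back[i][tags[-1]])
    return list(reversed(tags)), prob

print(viterbi(["fish", "sleep"]))   # (['noun', 'verb'], ...) as on the slide
```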
65
Markov Chain for a Simple Name Tagger
[Figure: Markov chain for a simple name tagger, with states START, PER, LOC, X, and END; arcs carry transition probabilities (0.6, 0.5, 0.3, 0.2, 0.1, …) and states carry emission probabilities such as Bob: 0.5, Dylan: 0.4, Albany: 0.1, Albany: 0.8, Dylan: 0.2, visited: 0.9, and ".": 1.0]
Note (from yesterday's talk): generative models characterize the center of a distribution, discriminative models the margin between distributions; an HMM fits sequential data well.
66
Exercise Tag names in the following sentence:
Bob Dylan visited Albany.
67
POS taggers
- Brill's tagger
- TnT tagger
- Stanford tagger
- SVMTool
- GENIA tagger
More complete list at:
68
Assignment