1 SIMS 290-2: Applied Natural Language Processing Marti Hearst Sept 15, 2004

2 Class Pace and Schedule
Need a foundation before you can do anything interesting: tokenizing, tagging, regexes; text classification principles and techniques; training vs. testing; processing corpora.
Through (approximately) the 6th week, keep doing exercises from the NLTK tutorials to build that foundation. 2 more homeworks; I'm trying to make them bite-sized pieces.
Weeks 7-10: group miniproject on the Enron corpus. It will involve classification or information extraction; different groups will do different things. May have a homework within this timeframe.
Weeks 11-15: another miniproject, either on the Enron corpus or on a topic of your choice. I will suggest ideas; you can propose them too. May also have 1-2 other homeworks in this timeframe.

3 Language Modeling
A fundamental concept in NLP.
Main idea: for a given language, some words are more likely than others to follow each other; equivalently, you can predict (with some degree of accuracy) the probability that a given word will follow another word.
Illustration: the distributions of words in the class-participation exercise.

4 Adapted from slide by Bonnie Dorr
Next Word Prediction
From a NY Times story...
Stocks...
Stocks plunged this...
Stocks plunged this morning, despite a cut in interest rates
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall...
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began

5 Adapted from slide by Bonnie Dorr
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last...
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.

6 Adapted from slide by Bonnie Dorr
Human Word Prediction
Clearly, at least some of us have the ability to predict future words in an utterance. How?
Domain knowledge, syntactic knowledge, lexical knowledge.

7 Adapted from slide by Bonnie Dorr
Claim
A useful part of the knowledge needed to allow word prediction can be captured using simple statistical techniques.
In particular, we'll rely on the notion of the probability of a sequence (a phrase, a sentence).

8 Adapted from slide by Bonnie Dorr
Applications
Why do we want to predict a word, given some preceding words?
Rank the likelihood of sequences containing various alternative hypotheses, e.g. for ASR: Theatre owners say popcorn/unicorn sales have doubled...
Assess the likelihood/goodness of a sentence, for text generation or machine translation: The doctor recommended a cat scan. / El doctor recomendó una exploración del gato.

9 Adapted from slide by Bonnie Dorr
N-Gram Models of Language
Use the previous N-1 words in a sequence to predict the next word: a Language Model (LM) with unigrams, bigrams, trigrams, ...
How do we train these models? Very large corpora.

10 Adapted from slide by Bonnie Dorr
Simple N-Grams
Assume a language has V word types in its lexicon; how likely is word x to follow word y?
Simplest model of word probability: 1/V.
Alternative 1: estimate the likelihood of x occurring in new text based on its general frequency of occurrence estimated from a corpus (unigram probability). popcorn is more likely to occur than unicorn.
Alternative 2: condition the likelihood of x occurring on the context of previous words (bigrams, trigrams, ...). mythical unicorn is more likely than mythical popcorn.

11 A Word on Notation
P(unicorn): read this as "the probability of seeing the token unicorn." A unigram tagger uses this.
P(unicorn|mythical): called the conditional probability; read this as "the probability of seeing the token unicorn given that you've seen the token mythical." A bigram tagger uses this.
Related to the conditional frequency distributions that we've been working with.
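To make the connection to conditional frequency distributions concrete, here is a minimal sketch (not from the original slides) using NLTK; the toy word list is invented for illustration:

import nltk

# A made-up toy corpus, purely for illustration
words = "the mythical unicorn saw the mythical popcorn".split()

# Unigram relative frequency -> an estimate of P(unicorn)
fdist = nltk.FreqDist(words)
p_unicorn = fdist.freq("unicorn")

# Conditional counts over bigrams -> an estimate of P(unicorn | mythical)
cfd = nltk.ConditionalFreqDist(nltk.bigrams(words))
p_unicorn_given_mythical = cfd["mythical"].freq("unicorn")

print(p_unicorn, p_unicorn_given_mythical)   # 1/7 vs. 1/2 on this toy corpus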

12 Adapted from slide by Bonnie Dorr
Computing the Probability of a Word Sequence
Compute the product of component conditional probabilities?
P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)
The longer the sequence, the less likely we are to find it in a training corpus:
P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)
Solution: approximate using n-grams.

13 Adapted from slide by Bonnie Dorr
Bigram Model
Approximate P(unicorn|the mythical) by P(unicorn|mythical).
Markov assumption: the probability of a word depends only on a limited history.
Generalization: the probability of a word depends only on the n previous words (trigrams, 4-grams, ...). The higher n is, the more data is needed to train; backoff models fall back to lower-order n-grams when data is sparse.

14 Adapted from slide by Bonnie Dorr
Using N-Grams
For N-gram models: P(w_{n-1}, w_n) = P(w_n | w_{n-1}) P(w_{n-1})
By the chain rule we can decompose a joint probability, e.g. P(w_1, w_2, w_3):
P(w_1, w_2, ..., w_n) = P(w_1 | w_2, w_3, ..., w_n) P(w_2 | w_3, ..., w_n) ... P(w_{n-1} | w_n) P(w_n)
For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams:
P(the, mythical, unicorn) = P(unicorn | mythical) P(mythical | the) P(the | <s>), where <s> marks the start of the sentence.
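A minimal sketch of that bigram product in code (not part of the original slides; the bigram_prob dictionary and the <s> start marker are assumptions of this example):

def bigram_sentence_prob(words, bigram_prob):
    """Probability of a word sequence under a bigram model.

    bigram_prob is assumed to map (previous_word, word) -> P(word | previous_word).
    Unseen bigrams get probability 0.0 here; a real model would smooth or back off.
    """
    prob = 1.0
    previous = "<s>"                      # sentence-start marker
    for word in words:
        prob *= bigram_prob.get((previous, word), 0.0)
        previous = word
    return prob

# Invented numbers, purely to show the mechanics
toy_probs = {("<s>", "the"): 0.2, ("the", "mythical"): 0.1, ("mythical", "unicorn"): 0.5}
print(bigram_sentence_prob(["the", "mythical", "unicorn"], toy_probs))   # 0.2 * 0.1 * 0.5 = 0.01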

15 Adapted from slide by Bonnie Dorr
Training and Testing
N-gram probabilities come from a training corpus. An overly narrow corpus: probabilities don't generalize. An overly general corpus: probabilities don't reflect the task or domain.
A separate test corpus is used to evaluate the model, typically using standard metrics: held-out test set, development test set, cross-validation, results tested for statistical significance.
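For example, a plain held-out split might look like the following sketch (a generic illustration, not the course's own code; the Brown news section and the 90/10 split are arbitrary choices, and the corpus must already be downloaded with nltk.download('brown')):

from nltk.corpus import brown

sents = brown.sents(categories="news")
cutoff = int(len(sents) * 0.9)
train_sents = sents[:cutoff]    # counts for the n-gram model come from here
test_sents = sents[cutoff:]     # held out, used only for evaluation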

16 Adapted from slide by Bonnie Dorr
A Simple Example
From BeRP, the Berkeley Restaurant Project: a testbed for a speech recognition project.
The system prompts the user for information in order to fill in slots in a restaurant database: type of food, hours open, how expensive.
After getting lots of input, we can compute how likely it is that someone will say X given that they already said Y:
P(I want to eat Chinese food) = P(I | <s>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

17 Adapted from slide by Bonnie Dorr
A Bigram Grammar Fragment from BeRP
eat on        .16    eat Thai      .03
eat some      .06    eat breakfast .03
eat lunch     .06    eat in        .02
eat dinner    .05    eat Chinese   .02
eat at        .04    eat Mexican   .02
eat a         .04    eat tomorrow  .01
eat Indian    .04    eat dessert   .007
eat today     .03    eat British   .001

18 Adapted from slide by Bonnie Dorr
<s> I           .25    I want      .32
<s> I'd         .06    I would     .29
<s> Tell        .04    I don't     .08
<s> I'm         .02    I have      .04
want to         .65    to eat      .26
want a          .05    to have     .14
want some       .04    to spend    .09
want Thai       .01    to be       .02
British food    .60    British restaurant .15
British cuisine .01    British lunch      .01
(<s> marks the start of a sentence.)

19 Adapted from slide by Bonnie Dorr
P(I want to eat British food) = P(I | <s>) P(want | I) P(to | want) P(eat | to) P(British | eat) P(food | British) = .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081
vs. P(I want to eat Chinese food) ≈ .00015
The probabilities seem to capture "syntactic" facts and "world knowledge": eat is often followed by an NP; British food is not too popular.
N-gram models can be trained by counting and normalization.
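Checking the arithmetic in a couple of lines (the .02 and .56 figures for the Chinese-food sentence come from slides 17 and 20):

# P(I want to eat British food)
p_british = 0.25 * 0.32 * 0.65 * 0.26 * 0.001 * 0.60   # ≈ 8.1e-06
# P(I want to eat Chinese food)
p_chinese = 0.25 * 0.32 * 0.65 * 0.26 * 0.02 * 0.56    # ≈ 1.5e-04
print(p_british, p_chinese)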

20 Adapted from slide by Bonnie Dorr
What do we learn about the language?
What's being captured with...
P(want | I) = .32, P(to | want) = .65, P(eat | to) = .26, P(food | Chinese) = .56, P(lunch | eat) = .055
What about...
P(I | I) = .0023, P(I | want) = .0025, P(I | food) = .013

21 Modified from Massimo Poesio's lecture
Tagging with Lexical Frequencies
Secretariat/NNP is/VBZ expected/VBN to/TO race/VB tomorrow/NN
People/NNS continue/VBP to/TO inquire/VB the/DT reason/NN for/IN the/DT race/NN for/IN outer/JJ space/NN
Problem: assign a tag to race given its lexical frequency.
Solution: choose the tag with the greater likelihood of generating "race": P(race|VB), the probability of the word race given the tag VB, vs. P(race|NN), the probability of the word race given the tag NN.
Actual estimates from the Switchboard corpus: P(race|NN) = .00041, P(race|VB) = .00003.
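A rough sketch of estimating such lexical likelihoods from a tagged corpus (an illustration only; it uses the Brown corpus with the universal tagset rather than Switchboard, so the numbers will differ from the slide's):

import nltk
from nltk.corpus import brown   # assumes nltk.download('brown') and nltk.download('universal_tagset')

# Count (tag, word) pairs, then read P(word | tag) off as a relative frequency
cfd = nltk.ConditionalFreqDist(
    (tag, word.lower()) for word, tag in brown.tagged_words(tagset="universal")
)
print(cfd["NOUN"].freq("race"))   # estimate of P(race | NOUN)
print(cfd["VERB"].freq("race"))   # estimate of P(race | VERB)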

22 Modified from Diane Litman's version of Steve Bird's notes
Combining Taggers
Use more accurate algorithms when we can; back off to wider coverage when needed:
Try tagging the token with the 1st-order tagger.
If the 1st-order tagger is unable to find a tag for the token, try finding a tag with the 0th-order tagger.
If the 0th-order tagger is also unable to find a tag, use the NN_CD_Tagger to find a tag.

23 Modified from Diane Litman's version of Steve Bird's notes
BackoffTagger class
>>> train_toks = TaggedTokenizer().tokenize(tagged_text_str)
# Construct the taggers
>>> tagger1 = NthOrderTagger(1, SUBTOKENS='WORDS')   # 1st order
>>> tagger2 = UnigramTagger()                         # 0th order
>>> tagger3 = NN_CD_Tagger()
# Train the taggers
>>> for tok in train_toks:
...     tagger1.train(tok)
...     tagger2.train(tok)

24 Modified from Diane Litman's version of Steve Bird's notes
Backoff (continued)
# Combine the taggers (in order, by specificity)
>>> tagger = BackoffTagger([tagger1, tagger2, tagger3])
# Use the combined tagger
>>> accuracy = tagger_accuracy(tagger, unseen_tokens)
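The two snippets above use the NLTK API as it existed in 2004. For readers with a current NLTK install, a hedged sketch of the same backoff chain with today's class names (BigramTagger plays the role of the 1st-order tagger, DefaultTagger the role of NN_CD_Tagger; the Brown corpus and the 4000-sentence split are arbitrary choices):

import nltk
from nltk.corpus import brown   # assumes nltk.download('brown')

tagged_sents = brown.tagged_sents(categories="news")
train_sents, test_sents = tagged_sents[:4000], tagged_sents[4000:]

t0 = nltk.DefaultTagger("NN")                      # last resort: tag everything NN
t1 = nltk.UnigramTagger(train_sents, backoff=t0)   # 0th-order (per-word) tagger
t2 = nltk.BigramTagger(train_sents, backoff=t1)    # 1st-order tagger, backs off to t1

print(t2.evaluate(test_sents))   # fraction of test tokens tagged correctly (accuracy() in newer releases)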

25 Modified from Diane Litman's version of Steve Bird's notes
Rule-Based Tagger: The Linguistic Complaint
Where is the linguistic knowledge of a tagger? Just a massive table of numbers.
Aren't there any linguistic insights that could emerge from the data?
We could instead use handcrafted sets of rules to tag input sentences; for example, if a word follows a determiner, tag it as a noun.

26 Slide modified from Massimo Poesio's
The Brill Tagger
An example of TRANSFORMATION-BASED LEARNING.
Very popular (freely available, works fairly well).
A SUPERVISED method: requires a tagged corpus.
Basic idea: do a quick job first (using frequency), then revise it using contextual rules.

27 Brill Tagging: In More Detail
Start with simple (less accurate) rules, then learn better ones from a tagged corpus:
1. Tag each word initially with its most likely POS.
2. Examine a set of transformations to see which most improves the tagging decisions compared to the tagged corpus.
3. Re-tag the corpus using the best transformation.
4. Repeat until, e.g., performance doesn't improve.
Result: a tagging procedure (an ordered list of transformations) which can be applied to new, untagged text.

28 Slide modified from Massimo Poesio's
An Example
Examples: They are expected to race tomorrow. / The race for outer space.
Tagging algorithm:
1. Tag all uses of "race" as NN (the most likely tag in the Brown corpus):
They are expected to race/NN tomorrow; the race/NN for outer space
2. Use a transformation rule to replace the tag NN with VB for all uses of "race" preceded by the tag TO:
They are expected to race/VB tomorrow; the race/NN for outer space
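Modern NLTK ships a transformation-based tagger in this same spirit; the following is a hedged sketch (the fntbl37 template set, the 20-rule cap, and the Brown news split are arbitrary choices for illustration, not the course's setup):

import nltk
from nltk.corpus import brown                      # assumes nltk.download('brown')
from nltk.tag.brill import fntbl37                 # one standard set of rule templates
from nltk.tag.brill_trainer import BrillTaggerTrainer

tagged_sents = brown.tagged_sents(categories="news")
train_sents, test_sents = tagged_sents[:4000], tagged_sents[4000:]

# Step 1: a quick initial tagger (most likely tag per word, NN as a fallback)
initial = nltk.UnigramTagger(train_sents, backoff=nltk.DefaultTagger("NN"))

# Step 2: learn contextual transformation rules that fix the initial tagger's errors
trainer = BrillTaggerTrainer(initial, fntbl37(), trace=0)
brill_tagger = trainer.train(train_sents, max_rules=20)

print(brill_tagger.rules()[:5])       # the first few learned transformations
print(brill_tagger.evaluate(test_sents))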

29 First 20 Transformation Rules
From: Eric Brill, "Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging," Computational Linguistics, December 1995.

30 Transformation Rules for Tagging Unknown Words
From: Eric Brill, "Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging," Computational Linguistics, December 1995.

31 Adapted from Massimo Poesio's
Additional Issues
Most of the difference in performance between POS tagging algorithms depends on their treatment of UNKNOWN WORDS.
Class-based N-grams.

32 Modified from Diane Litman's version of Steve Bird's notes
Evaluating a Tagger
Tagged tokens: the original data.
Untag (exclude) the data.
Tag the data with your own tagger.
Compare the original and new tags: iterate over the two lists, checking for identity and counting.
Accuracy = fraction correct.
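In code, that loop amounts to something like this minimal sketch (the DefaultTagger and the Brown slice are stand-ins; any tagger and tagged corpus would do):

import nltk
from nltk.corpus import brown   # assumes nltk.download('brown')

gold_sents = brown.tagged_sents(categories="news")[4000:4100]   # original tagged data
tagger = nltk.DefaultTagger("NN")                                # stand-in tagger

correct = total = 0
for gold in gold_sents:
    words = [w for w, t in gold]        # untag: strip the gold tags
    predicted = tagger.tag(words)       # re-tag with our own tagger
    for (w, gold_tag), (_, pred_tag) in zip(gold, predicted):
        total += 1
        correct += (gold_tag == pred_tag)

print(correct / total)                  # accuracy = fraction correct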

33 Assessing the Errors
Why the tuple method? Dictionaries cannot be indexed by lists, so convert lists to tuples.
exclude returns a new token containing only the properties that are not named in the given list.
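The point about tuples as dictionary keys, in a minimal sketch (the gold/predicted tag pairs here are invented; exclude belongs to the old NLTK token API and is not reproduced):

from collections import defaultdict

# Hypothetical (gold tag, predicted tag) pairs from an evaluation run
pairs = [("NN", "NN"), ("VB", "NN"), ("NN", "VB"), ("VB", "NN")]

errors = defaultdict(int)
for gold, pred in pairs:
    if gold != pred:
        errors[(gold, pred)] += 1      # a tuple works as a dict key; a list would not

# Most frequent confusions first
for (gold, pred), count in sorted(errors.items(), key=lambda kv: -kv[1]):
    print(f"{gold} mis-tagged as {pred}: {count}")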

34 Assessing the Errors

35 Upcoming
First assignment due 8pm tonight; turn in on the course Assignments page.
For next week: read the Chunking tutorial (the PDF version has the missing images). We'll have an assignment getting practice with this.