1 I256: Applied Natural Language Processing Marti Hearst Oct 9, 2006
2 Today Finish Conditional Probabilities and Bayesian Learning; Intro to Classification; Identification of Language and Author
3 Slide adapted from Dan Jurafsky's Conditional Probability A way to reason about the outcome of an experiment based on partial information In a word-guessing game, the first letter of the word is a “t”. What is the likelihood that the second letter is an “h”? How likely is it that a person has a disease given that a medical test was negative? A spot shows up on a radar screen. How likely is it that it corresponds to an aircraft?
4 Slides adapted from Mary Ellen Califf Conditional Probability Conditional probability specifies the probability given that the values of some other random variables are known. P(Sneeze | Cold) = 0.8 P(Cold | Sneeze) = 0.6 The probability of a sneeze given a cold is 80%. The probability of a cold given a sneeze is 60%.
5 Slide adapted from Dan Jurafsky's More precisely Given an experiment, a corresponding sample space S, and a probability law. Suppose we know that the outcome is within some given event B (the first letter was ‘t’). We want to quantify the likelihood that the outcome also belongs to some other given event A (the second letter will be ‘h’). We need a new probability law that gives us the conditional probability of A given B: P(A|B), “the probability of A given B”
6 Slides adapted from Mary Ellen Califf Joint Probability Distribution The joint probability distribution for a set of random variables X1…Xn gives the probability of every combination of values: P(X1,...,Xn)

          Sneeze   ¬Sneeze
Cold      0.08     0.01
¬Cold     0.01     0.9

The probability of all possible cases can be calculated by summing the appropriate subset of values from the joint distribution. All conditional probabilities can therefore also be calculated, e.g. P(Cold | ¬Sneeze)
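To make the "sum the appropriate subset" step concrete, here is a minimal Python sketch (an illustration, not from the slides) that recovers P(Cold | ¬Sneeze) from the joint table above:

```python
# Joint distribution P(Cold, Sneeze) from the slide's table.
joint = {
    ("cold", "sneeze"): 0.08,
    ("cold", "no_sneeze"): 0.01,
    ("no_cold", "sneeze"): 0.01,
    ("no_cold", "no_sneeze"): 0.90,
}

# P(¬Sneeze): sum the joint probabilities over both values of Cold.
p_no_sneeze = sum(p for (_, sneeze), p in joint.items() if sneeze == "no_sneeze")

# P(Cold | ¬Sneeze) = P(Cold, ¬Sneeze) / P(¬Sneeze)
p_cold_given_no_sneeze = joint[("cold", "no_sneeze")] / p_no_sneeze

print(p_cold_given_no_sneeze)  # 0.01 / 0.91 ≈ 0.011
```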
7 Slide adapted from Dan Jurafsky's An intuition Let’s say A is “it’s raining”. Let’s say P(A) in dry California is .01 Let’s say B is “it was sunny ten minutes ago” P(A|B) means “what is the probability of it raining now if it was sunny 10 minutes ago” P(A|B) is probably way less than P(A) Perhaps P(A|B) is .0001 Intuition: The knowledge about B should change our estimate of the probability of A.
8 Slide adapted from Dan Jurafsky's Conditional Probability Let A and B be events P(A,B) and P(A ∩ B) both mean “the probability that BOTH A and B occur” P(B|A) = the probability of event B occurring given that event A occurs Definition: P(A|B) = P(A ∩ B) / P(B) P(A,B) = P(A|B) * P(B) (simple arithmetic) P(A,B) = P(B,A)
9 Bayes Theorem We start with the conditional probability definition: P(A|B) = P(A,B) / P(B) So say we know how to compute P(A|B). What if we want to figure out P(B|A)? We can re-arrange the formula using Bayes Theorem: P(B|A) = P(A|B) P(B) / P(A)
10 Slide adapted from Dan Jurafsky's Deriving Bayes Rule
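The derivation on this slide was an image and did not survive extraction; the standard derivation from the definition above, restated here as a sketch:

```latex
% Sketch of the derivation of Bayes' rule (reconstructed; not copied from the slide)
\begin{align*}
P(A \mid B) &= \frac{P(A,B)}{P(B)} && \text{definition of conditional probability}\\
P(B \mid A) &= \frac{P(A,B)}{P(A)} && \text{same definition with $A$ and $B$ swapped}\\
P(A,B)      &= P(A \mid B)\,P(B) = P(B \mid A)\,P(A) && \text{rearrange both}\\
P(B \mid A) &= \frac{P(A \mid B)\,P(B)}{P(A)} && \text{Bayes' rule}
\end{align*}
```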
11 Slides adapted from Mary Ellen Califf How to compute probabilities? We don’t have the probabilities for most NLP problems We can try to estimate them from data (that’s the learning part) Usually we can’t actually estimate the probability that something belongs to a given class given the information about it BUT we can estimate the probability that something in a given class has particular values.
12 Slides adapted from Mary Ellen Califf Simple Bayesian Reasoning If we assume there are n possible disjoint tags, t1 … tn: P(ti | w) = P(w | ti) P(ti) / P(w) We want to know the probability of the tag given the word. P(w | ti) = the number of times we see this word with this tag, divided by how often we see the tag: P(w | ti) = count(word w with tag ti) / count(tag ti in corpus) P(ti) = count(tag ti in corpus) / count(all tags) P(w) = count(word w in corpus) / count(all words)
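A minimal sketch of these estimates in Python (an illustration, not the lecture's code); the tiny tagged corpus below is a hypothetical stand-in for a real one:

```python
from collections import Counter

# Hypothetical tagged corpus: a list of (word, tag) pairs.
tagged_corpus = [("the", "DT"), ("dog", "NN"), ("runs", "VBZ"),
                 ("the", "DT"), ("run", "NN"), ("run", "VB")]

word_tag_counts = Counter(tagged_corpus)           # count(word w with tag t)
tag_counts = Counter(t for _, t in tagged_corpus)  # count(tag t)
word_counts = Counter(w for w, _ in tagged_corpus) # count(word w)
n_tokens = len(tagged_corpus)

def p_tag_given_word(tag, word):
    """P(t | w) = P(w | t) * P(t) / P(w), estimated from corpus counts."""
    p_w_given_t = word_tag_counts[(word, tag)] / tag_counts[tag]
    p_t = tag_counts[tag] / n_tokens
    p_w = word_counts[word] / n_tokens
    return p_w_given_t * p_t / p_w

print(p_tag_given_word("NN", "run"))  # 0.5 with this toy corpus
```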
13 Some notation ∏i P(fi | Sentence) This means that you multiply all the feature probabilities together: P(f1 | S) * P(f2 | S) * … * P(fn | S) There is a similar notation, Σ, for summation.
14 Naïve Bayes Classifier The simpler version of Bayes was: P(B|A) = P(A|B) P(B) / P(A) P(Sentence | feature) = P(feature | Sentence) P(Sentence) / P(feature) Using Naïve Bayes, we expand the number of features by defining a joint probability distribution: P(Sentence, f1, f2, … fn) = P(Sentence) ∏i P(fi | Sentence) We learn P(Sentence) and P(fi | Sentence) in training Test: we need to state P(Sentence | f1, f2, … fn) P(Sentence | f1, f2, … fn) = P(Sentence, f1, f2, … fn) / P(f1, f2, … fn)
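A minimal Naïve Bayes sketch in Python (an illustration, not the classifier used in the lecture); the classes, features, and add-one smoothing are assumptions, and the product over P(fi | class) is computed in log space to avoid underflow:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: (set of features, class label) pairs.
training = [
    ({"sneeze", "cough"}, "cold"),
    ({"sneeze"}, "allergy"),
    ({"cough", "fever"}, "cold"),
    (set(), "well"),
]

class_counts = Counter(label for _, label in training)
feature_counts = defaultdict(Counter)   # feature_counts[label][feature]
for features, label in training:
    feature_counts[label].update(features)

vocab = {f for features, _ in training for f in features}

def classify(features):
    """argmax_c [ log P(c) + sum_i log P(f_i | c) ], with add-one smoothing."""
    best_label, best_score = None, float("-inf")
    for label, c_count in class_counts.items():
        score = math.log(c_count / len(training))              # log P(c)
        denom = sum(feature_counts[label].values()) + len(vocab)
        for f in features:
            score += math.log((feature_counts[label][f] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify({"sneeze", "cough"}))  # "cold" with this toy data
```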
15 Slides adapted from Mary Ellen Califf Bayes Independence Example If there are many kinds of evidence, we need to combine them By assuming independence, we ignore the possible interactions Imagine there are diagnoses ALLERGY, COLD, and WELL and symptoms SNEEZE, COUGH, and FEVER:

Prob            Well   Cold   Allergy
P(d)            0.9    0.05   0.05
P(sneeze | d)   0.1    0.9    0.9
P(cough | d)    0.1    0.8    0.7
P(fever | d)    0.01   0.7    0.4
16 Slides adapted from Mary Ellen Califf Bayes Independence Example If the symptoms are sneeze & cough & no fever (call this evidence e): P(well | s, c, ¬f) = P(e | well) P(well) / P(e) = P(s | well) * P(c | well) * (1 - P(f | well)) * P(well) / P(e) = (0.1)(0.1)(0.99)(0.9)/P(e) = 0.0089/P(e) P(cold | e) = (0.05)(0.9)(0.8)(0.3)/P(e) = 0.01/P(e) P(allergy | e) = (0.05)(0.9)(0.7)(0.6)/P(e) = 0.019/P(e) P(e) = 0.0089 + 0.01 + 0.019 = 0.0379 P(well | e) = 0.23 P(cold | e) = 0.26 P(allergy | e) = 0.50 Diagnosis: allergy
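A quick check of the arithmetic above as a short Python sketch (the inputs are the probabilities from the previous slide's table; exact posteriors differ slightly from the slide because the slide rounds the intermediate products):

```python
# Unnormalized scores: P(d) * P(sneeze|d) * P(cough|d) * (1 - P(fever|d))
scores = {
    "well":    0.9  * 0.1 * 0.1 * (1 - 0.01),
    "cold":    0.05 * 0.9 * 0.8 * (1 - 0.7),
    "allergy": 0.05 * 0.9 * 0.7 * (1 - 0.4),
}
p_e = sum(scores.values())                       # ≈ 0.038
posteriors = {d: s / p_e for d, s in scores.items()}

print(posteriors)                                # allergy ≈ 0.49, cold ≈ 0.28, well ≈ 0.23
print(max(posteriors, key=posteriors.get))       # "allergy", matching the slide's diagnosis
```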
17 Kupiec et al. Feature Representation Fixed-phrase feature Certain phrases indicate summary, e.g. “in summary” Paragraph feature Paragraph initial/final more likely to be important. Thematic word feature Repetition is an indicator of importance Uppercase word feature Uppercase often indicates named entities. (Taylor) Sentence length cut-off Summary sentence should be > 5 words.
18 Details: Bayesian Classifier Assuming statistical independence, we compute the probability that sentence s is included in summary S, given that sentence’s feature-value pairs, from: the probability of each feature-value pair occurring in a source sentence which is also in the summary; the compression rate; and the probability of each feature-value pair occurring in a source sentence (see the reconstructed equation below).
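The equation itself appeared as an image on the slide; as a reconstruction matching the term descriptions above (this is the form given in Kupiec et al., 1995):

```latex
% Reconstruction of the classifier equation described on the slide
% P(s in S | features) from per-feature likelihoods, the compression rate P(s in S),
% and the marginal feature probabilities
P(s \in S \mid F_1, \dots, F_k)
  = \frac{\left[\prod_{j=1}^{k} P(F_j \mid s \in S)\right] P(s \in S)}
         {\prod_{j=1}^{k} P(F_j)}
```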
19 Language Identification
20 Language identification Tutti gli esseri umani nascono liberi ed eguali in dignità e diritti. Essi sono dotati di ragione e di coscienza e devono agire gli uni verso gli altri in spirito di fratellanza. Alle Menschen sind frei und gleich an Würde und Rechten geboren. Sie sind mit Vernunft und Gewissen begabt und sollen einander im Geist der Brüderlichkeit begegnen. Universal Declaration of Human Rights, UN, in 363 languages http://www.unhchr.ch/udhr/navigate/alpha.htm
21 Language identification égaux eguali iguales edistämään Ü ¿ How do we determine, for a stretch of text, which language it is from?
22 Language Identification Turns out to be really simple Just a few character bigrams can do it (Sibun & Reynar 96) Used Kullback-Leibler distance (relative entropy) Compare the probability distribution of the test set to those for the languages trained on The smallest distance determines the language Using special character sets helps a bit, but barely
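A minimal sketch of this approach in Python (an illustration, not Sibun & Reynar's implementation); the tiny training texts and the add-alpha smoothing constant are assumptions:

```python
import math
from collections import Counter

def bigram_counts(text):
    """Character-bigram counts for a text."""
    return Counter(zip(text, text[1:]))

def kl_divergence(test_counts, train_counts, alpha=0.5):
    """KL(test || train) over the union of observed bigrams, with add-alpha smoothing."""
    support = set(test_counts) | set(train_counts)
    test_total = sum(test_counts.values()) + alpha * len(support)
    train_total = sum(train_counts.values()) + alpha * len(support)
    kl = 0.0
    for bg in support:
        p = (test_counts[bg] + alpha) / test_total
        q = (train_counts[bg] + alpha) / train_total
        kl += p * math.log(p / q)
    return kl

# Hypothetical training texts (in practice, much larger corpora per language).
training = {
    "italian": "tutti gli esseri umani nascono liberi ed eguali in dignità e diritti",
    "german":  "alle menschen sind frei und gleich an würde und rechten geboren",
    "english": "all human beings are born free and equal in dignity and rights",
}
models = {lang: bigram_counts(text) for lang, text in training.items()}

test = "essi sono dotati di ragione e di coscienza"
guess = min(models, key=lambda lang: kl_divergence(bigram_counts(test), models[lang]))
print(guess)  # expected: "italian" (smallest relative entropy)
```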
23 Language Identification (Sibun & Reynar 96)
24 Confusion Matrix A table that shows, for each class, which items your algorithm got right and which it got wrong: one axis is the algorithm’s guess, the other is the gold standard
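A minimal sketch of building a confusion matrix from (gold, guess) pairs in Python (the language labels below are hypothetical, not the Sibun & Reynar data):

```python
from collections import Counter

# Hypothetical (gold standard, algorithm's guess) pairs.
results = [("english", "english"), ("english", "dutch"),
           ("italian", "italian"), ("german", "german"),
           ("dutch", "german")]

confusion = Counter(results)  # keyed by (gold, guess)

labels = sorted({lang for pair in results for lang in pair})
print("gold\\guess".ljust(12) + "".join(l.ljust(10) for l in labels))
for gold in labels:
    row = "".join(str(confusion[(gold, guess)]).ljust(10) for guess in labels)
    print(gold.ljust(12) + row)
```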
26 Author Identification (Stylometry)
27 Author Identification Also called Stylometry in the humanities An example of a Classification Problem Classifiers: Decide which of N buckets to put an item in (Some classifiers allow for multiple buckets)
28 The Disputed Federalist Papers In 1787-1788, Jay, Madison, and Hamilton wrote a series of anonymous essays to convince the voters of New York to ratify the new U.S. Constitution. There is scholarly consensus that: 5 were authored by Jay 51 were authored by Hamilton 14 were authored by Madison 3 were written jointly by Hamilton and Madison 12 remain in dispute … Hamilton or Madison?
29 Author identification: the Federalist Papers In 1963 Mosteller and Wallace solved the problem They identified function words as good candidates for authorship analysis Using statistical inference they concluded the author was Madison Since then, other statistical techniques have supported this conclusion.
30 Function vs. Content Words High rates for “by” favor M, low favor H High rates for “from” favor M, low says little High rates for “to” favor H, low favor M
31 Function vs. Content Words No consistent pattern for “war”
32 Federalist Papers Problem Fung, The Disputed Federalist Papers: SVM Feature Selection Via Concave Minimization, ACM TAPIA’03
33 Discussion Can Pseudonymity Really Guarantee Privacy? Rao and Rohatgi, 2000
34 Next Time Guest lecture by Elizabeth Charnock and Steve Roberts of Cataphora