1
CSE 446: Naïve Bayes Winter 2012 Dan Weld Some slides from Carlos Guestrin, Luke Zettlemoyer & Dan Klein
2
Today: Gaussians, Naïve Bayes, Text Classification
3
Long Ago: Random variables, distributions; Marginal, joint & conditional probabilities; Sum rule, product rule, Bayes rule; Independence, conditional independence
4
Last Time

                                Prior      Hypothesis
Maximum Likelihood Estimate     Uniform    The most likely
Maximum A Posteriori Estimate   Any        The most likely
Bayesian Estimate               Any        Weighted combination
5
Bayesian Learning. Use Bayes rule: posterior = (data likelihood × prior) / normalization, i.e., P(θ | D) = P(D | θ) P(θ) / P(D). Or equivalently: P(θ | D) ∝ P(D | θ) P(θ).
6
Conjugate Priors?
7
Those Pesky Distributions

                                    Discrete: Binary {0, 1}   Discrete: M values   Continuous
Single Event                        Bernoulli
Sequence (N trials), N = H + T      Binomial                  Multinomial
Conjugate Prior                     Beta                      Dirichlet
8
What about continuous variables? Billionaire says: If I am measuring a continuous variable, what can you do for me? You say: Let me tell you about Gaussians…
9
Some properties of Gaussians
Affine transformations (multiplying by a scalar and adding a constant) of a Gaussian are Gaussian:
– X ~ N(μ, σ²), Y = aX + b  ⇒  Y ~ N(aμ + b, a²σ²)
Sum of (independent) Gaussians is Gaussian:
– X ~ N(μ_X, σ²_X), Y ~ N(μ_Y, σ²_Y), Z = X + Y  ⇒  Z ~ N(μ_X + μ_Y, σ²_X + σ²_Y)
Makes them easy to differentiate, as we will see soon!
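As an illustration of these two properties, here is a minimal sketch (not from the slides) that checks them empirically by sampling; the specific parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, a, b = 2.0, 3.0, 0.5, 1.0

x = rng.normal(mu, sigma, size=1_000_000)
y = a * x + b                      # affine transform: should be ~ N(a*mu + b, a^2 * sigma^2)
print(y.mean(), y.var())           # approx 2.0 and 2.25

x2 = rng.normal(-1.0, 2.0, size=1_000_000)
z = x + x2                         # sum of independent Gaussians: ~ N(mu1 + mu2, var1 + var2)
print(z.mean(), z.var())           # approx 1.0 and 13.0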
10
Learning a Gaussian
Collect a bunch of data
– Hopefully i.i.d. samples, e.g., exam scores
Learn parameters
– Mean: μ
– Variance: σ²

 i    X_i = Exam Score
 0    85
 1    95
 2    100
 3    12
 …    …
 99   89
11
MLE for Gaussian: probability of i.i.d. samples D = {x_1, …, x_N}, and the log-likelihood of the data:
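The slide's equations are images that did not survive the transcript; for reference, the standard forms they show are:

P(D \mid \mu, \sigma) = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{N} \prod_{i=1}^{N} e^{-\frac{(x_i - \mu)^2}{2\sigma^2}}

\ln P(D \mid \mu, \sigma) = -N \ln\left(\sigma\sqrt{2\pi}\right) - \sum_{i=1}^{N} \frac{(x_i - \mu)^2}{2\sigma^2}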
12
Your second learning algorithm: MLE for mean of a Gaussian What’s MLE for mean?
13
MLE for variance Again, set derivative to zero:
14
Learning Gaussian parameters MLE:
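The MLE formulas themselves are images on the slide; setting the derivatives of the log-likelihood to zero gives the standard estimates:

\mu_{MLE} = \frac{1}{N}\sum_{i=1}^{N} x_i, \qquad \sigma^2_{MLE} = \frac{1}{N}\sum_{i=1}^{N} (x_i - \mu_{MLE})^2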
15
Learning Gaussian parameters, MLE. BTW, the MLE for the variance of a Gaussian is biased – the expected result of estimation is not the true parameter! Unbiased variance estimator:
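The unbiased estimator divides by N − 1 instead of N: \hat{\sigma}^2 = \frac{1}{N-1}\sum_i (x_i - \hat{\mu})^2. A minimal sketch (with assumed data, not from the slides) computing the MLE mean, the biased MLE variance, and the unbiased estimate:

import numpy as np

scores = np.array([85, 95, 100, 12, 89], dtype=float)     # a few of the exam scores from the earlier slide
N = len(scores)

mu_mle = scores.mean()                                    # (1/N) * sum(x_i)
var_mle = ((scores - mu_mle) ** 2).sum() / N              # biased: divides by N
var_unbiased = ((scores - mu_mle) ** 2).sum() / (N - 1)   # unbiased: divides by N - 1

print(mu_mle, var_mle, var_unbiased)                      # numpy equivalents: scores.var(), scores.var(ddof=1)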
16
Bayesian learning of Gaussian parameters Conjugate priors – Mean: Gaussian prior – Variance: Wishart Distribution
17
Bayesian learning of Gaussian parameters Conjugate priors – Mean: Gaussian prior – Variance: Wishart Distribution Prior for mean:
18
MAP for mean of Gaussian
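The derivation on this slide is an image; for reference, the standard result, assuming a conjugate Gaussian prior μ ~ N(μ₀, σ₀²) and known variance σ², is:

\mu_{MAP} = \frac{\sigma^2 \mu_0 + \sigma_0^2 \sum_{i=1}^{N} x_i}{\sigma^2 + N\sigma_0^2}

so as N grows the data term dominates and the prior is forgotten, matching the MLE in the limit.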
19
Supervised Learning of Classifiers: Find f
Given: a training set {(x_i, y_i) | i = 1 … n}
Find: a good approximation to f : X → Y
Examples: what are X and Y?
– Spam Detection: map email to {Spam, Ham}
– Digit recognition: map pixels to {0,1,2,3,4,5,6,7,8,9}
– Stock Prediction: map new, historic prices, etc. to ℝ (the real numbers)
Classification: the first two examples, where Y is a discrete set.
20
Bayesian Categorization. Let the set of categories be {c_1, c_2, …, c_n}. Let E be the description of an instance. Determine the category of E by computing, for each c_i, P(c_i | E) = P(c_i) P(E | c_i) / P(E). P(E) can be ignored since it is a factor common to all categories.
21
Text classification
– Classify e-mails: Y = {Spam, NotSpam}
– Classify news articles: Y = {what is the topic of the article?}
– Classify webpages: Y = {Student, professor, project, …}
What to use for features, X?
22
Example: Spam Filter
Input: email. Output: spam/ham.
Setup:
– Get a large collection of example emails, each labeled "spam" or "ham"
– Note: someone has to hand-label all this data!
– Want to learn to predict labels of new, future emails
Features: the attributes used to make the ham / spam decision
– Words: FREE!
– Text patterns: $dd, CAPS
– For email specifically, semantic features: SenderInContacts
– …
Example messages:
"Dear Sir. First, I must solicit your confidence in this transaction, this is by virture of its nature as being utterly confidencial and top secret. …"
"TO BE REMOVED FROM FUTURE MAILINGS, SIMPLY REPLY TO THIS MESSAGE AND PUT "REMOVE" IN THE SUBJECT. 99 MILLION EMAIL ADDRESSES FOR ONLY $99"
"Ok, I know this is blatantly OT but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use, I know it was working pre being stuck in the corner, but when I plugged it in, hit the power nothing happened."
23
Features: X is the word sequence in the document; X_i is the i-th word in the article.
24
Features for Text Classification
X is the sequence of words in a document
X (and hence P(X|Y)) is huge!!!
– An article has at least 1000 words, X = {X_1, …, X_1000}
– X_i represents the i-th word in the document, i.e., the domain of X_i is the entire vocabulary, e.g., Webster Dictionary (or more): 10,000 words, etc.
– 10,000^1000 = 10^4000 possible documents; atoms in the universe: ≈ 10^80
– We may have a problem…
25
Bag of Words Model
Typical additional assumption:
– Position in the document doesn't matter: P(X_i = x_i | Y = y) = P(X_k = x_i | Y = y) (all positions have the same distribution)
– Ignore the order of words
Example sentence: "When the lecture is over, remember to wake up the person sitting next to you in the lecture room."
26
Bag of Words Model
The same sentence as a bag of words: in is lecture lecture next over person remember room sitting the the the to to up wake when you
Typical additional assumption:
– Position in the document doesn't matter: P(X_i = x_i | Y = y) = P(X_k = x_i | Y = y) (all positions have the same distribution)
– Ignore the order of words
– Sounds really silly, but often works very well!
– From now on: X_i = Boolean: "word i is in document"; X = X_1 … X_n
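A minimal sketch (not from the slides) of turning a document into the Boolean bag-of-words features just described, assuming a small hypothetical vocabulary:

# Hypothetical vocabulary; in practice V comes from the training corpus.
vocab = ["lecture", "wake", "free", "oil", "person"]

def bool_bag_of_words(text, vocab):
    """X_i = True iff word i appears anywhere in the document (order and counts ignored)."""
    words = set(text.lower().split())
    return {w: (w in words) for w in vocab}

doc = "When the lecture is over remember to wake up the person sitting next to you"
print(bool_bag_of_words(doc, vocab))
# {'lecture': True, 'wake': True, 'free': False, 'oil': False, 'person': True}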
27
Bag of Words Approach (word-count vector):
 aardvark   0
 about      2
 all        2
 Africa     1
 apple      0
 anxious    0
 ...
 gas        1
 ...
 oil        1
 …
 Zaire      0
28
Bayesian Categorization
Need to know:
– Priors: P(y_i)
– Conditionals: P(X | y_i)
P(y_i) are easily estimated from data:
– If n_i of the examples in D are in class y_i, then P(y_i) = n_i / |D|
Conditionals:
– X = X_1 … X_n
– Estimate P(X_1 … X_n | y_i)
Too many possible instances to estimate!
– (exponential in n)
– Even with the bag of words assumption! Problem!
P(y_i | X) ∝ P(y_i) P(X | y_i)
29
Need to Simplify Somehow
Too many probabilities: P(x_1, x_2, x_3 | y_i) needs one entry per assignment of the x's, e.g., P(x_1, x_2, x_3 | spam), P(x_1, x_2, ¬x_3 | spam), P(x_1, ¬x_2, x_3 | spam), …, P(¬x_1, ¬x_2, ¬x_3 | spam).
Can we assume some are the same?
– e.g., P(x_1, x_2) = P(x_1) P(x_2)?
30
Conditional Independence. X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y, given the value of Z.
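The defining equations are images on the slide; the standard statements they correspond to are:

\forall x, y, z: \quad P(X = x \mid Y = y, Z = z) = P(X = x \mid Z = z)

which is equivalent to

P(X, Y \mid Z) = P(X \mid Z)\, P(Y \mid Z)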
31
Naïve Bayes. Naïve Bayes assumption: features are independent given the class (see the formulas below). How many parameters now? Suppose X is composed of n binary features.
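The assumption and the parameter count are images on the slide; the standard forms are (the count assumes a binary class Y and n binary features):

P(X_i, X_j \mid Y) = P(X_i \mid Y)\, P(X_j \mid Y), \qquad \text{more generally:} \quad P(X_1, \ldots, X_n \mid Y) = \prod_{i=1}^{n} P(X_i \mid Y)

Modeling the full joint P(X_1, …, X_n | Y) would take 2(2^n − 1) parameters; under the Naïve Bayes assumption only 2n are needed (one P(X_i = 1 | Y = y) per feature per class), plus one for the prior P(Y).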
32
The Naïve Bayes Classifier
Given:
– Prior P(Y)
– n conditionally independent features X_1, …, X_n given the class Y
– For each X_i, the likelihood P(X_i | Y)
Decision rule:
(Graphical model on the slide: a class node Y with children X_1, X_2, …, X_n.)
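The decision rule itself is an image; the standard rule it shows is:

y^* = h_{NB}(x) = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)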
33
MLE for the parameters of NB. Given a dataset, count occurrences. The MLE for discrete NB is simply (prior and likelihood below):
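The count-based estimates are images on the slide; the standard forms, writing Count(·) for the number of training examples matching the condition, are:

P(Y = y) = \frac{\text{Count}(Y = y)}{\sum_{y'} \text{Count}(Y = y')}, \qquad P(X_i = x \mid Y = y) = \frac{\text{Count}(X_i = x,\, Y = y)}{\text{Count}(Y = y)}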
34
Subtleties of NB Classifier 1: Violating the NB Assumption
Usually, features are not conditionally independent (the product decomposition does not hold exactly).
Actual probabilities P(Y|X) are often biased towards 0 or 1.
Nonetheless, NB is the single most used classifier out there:
– NB often performs well, even when the assumption is violated
– [Domingos & Pazzani '96] discuss some conditions for good performance
35
Subtleties of NB Classifier 2: Overfitting
36
2 wins!!
37
For Binary Features: We already know the answer! MAP: use the most likely parameter. A Beta prior is equivalent to extra observations for each feature. As N → ∞, the prior is "forgotten". But for small sample sizes, the prior is important!
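The MAP formula is an image on the slide; the standard result, assuming a Beta(β_H, β_T) prior on the heads probability θ with N_H heads and N_T tails observed, is:

\theta_{MAP} = \frac{N_H + \beta_H - 1}{N_H + N_T + \beta_H + \beta_T - 2}

so the prior acts like β_H − 1 extra heads and β_T − 1 extra tails, and its influence vanishes as N = N_H + N_T grows.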
38
That's Great for Binomial. Works for Spam / Ham. What about multiple classes? E.g., given a Wikipedia page, predicting its type.
39
Multinomials: Laplace Smoothing
Laplace's estimate:
– Pretend you saw every outcome k extra times
– What's Laplace with k = 0?
– k is the strength of the prior
– Can derive this as a MAP estimate for a multinomial with Dirichlet priors
Laplace for conditionals:
– Smooth each conditional independently
(Running example on the slide: the observations H H T.)
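A minimal sketch (not from the slides) of add-k Laplace smoothing; the estimate is P_LAP,k(x) = (count(x) + k) / (N + k|X|), and the same smoothing is applied within each class for conditionals.

def laplace_estimate(counts, k=1.0):
    """Add-k smoothed multinomial estimate from a dict of outcome -> count."""
    total = sum(counts.values())        # N
    size = len(counts)                  # |X|, the number of possible outcomes
    return {x: (c + k) / (total + k * size) for x, c in counts.items()}

# The slide's coin example: observations H, H, T.
print(laplace_estimate({"H": 2, "T": 1}, k=0))  # {'H': ~0.667, 'T': ~0.333}  (k = 0 is the plain MLE)
print(laplace_estimate({"H": 2, "T": 1}, k=1))  # {'H': 0.6,    'T': 0.4}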
40
Naïve Bayes for Text. Modeled as generating a bag of words for a document in a given category by repeatedly sampling with replacement from a vocabulary V = {w_1, w_2, …, w_m} based on the probabilities P(w_j | c_i). Smooth probability estimates with Laplace m-estimates, assuming a uniform distribution over all words (p = 1/|V|) and m = |V| – equivalent to a virtual sample of seeing each word in each category exactly once.
41
Easy to Implement. But… if you do… it probably won't work…
42
Probabilities: Important Detail!
Any more potential problems here? P(spam | X_1 … X_n) = ∏_i P(spam | X_i)
We are multiplying lots of small numbers – danger of underflow! 0.5^57 ≈ 7 × 10^−18
Solution? Use logs and add! p_1 · p_2 = e^(log(p_1) + log(p_2))
Always keep in log form.
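A minimal sketch (not from the slides) of why the log form matters: the product of many small probabilities is accumulated as a sum of logs.

import math

probs = [0.5] * 57
direct = math.prod(probs)                       # ~6.9e-18; much longer products underflow to 0.0
log_total = sum(math.log(p) for p in probs)     # safe: the sum stays in a comfortable range
print(direct, math.exp(log_total))              # same value; comparisons can be done on log_total directly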
43
Naïve Bayes Posterior Probabilities. Classification results of naïve Bayes – i.e., the class with maximum posterior probability – are usually fairly accurate (?!?!?). However, due to the inadequacy of the conditional independence assumption, the actual posterior-probability estimates are not accurate: output probabilities are generally very close to 0 or 1.
44
Bag of words model (recap)
The example sentence as a bag of words: in is lecture lecture next over person remember room sitting the the the to to up wake when you
Typical additional assumption:
– Position in the document doesn't matter: P(X_i = x_i | Y = y) = P(X_k = x_i | Y = y) (all positions have the same distribution)
– "Bag of words" model – the order of words on the page is ignored
– Sounds really silly, but often works very well!
45
NB with Bag of Words for text classification
Learning phase:
– Prior P(Y): count how many documents are from each topic (prior)
– P(X_i | Y): for each topic, count how many times you saw the word in documents of that topic (+ prior); remember this distribution is shared across all positions i
Test phase:
– For each document, use the naïve Bayes decision rule
46
Twenty News Groups results
47
Learning curve for Twenty News Groups
48
Example: Digit Recognition
Input: images / pixel grids. Output: a digit 0–9.
Setup:
– Get a large collection of example images, each labeled with a digit
– Note: someone has to hand-label all this data!
– Want to learn to predict labels of new, future digit images
Features: the attributes used to make the digit decision
– Pixels: (6,8) = ON
– Shape patterns: NumComponents, AspectRatio, NumLoops
– …
(Example images on the slide, labeled 0, 1, 2, 1, and an unlabeled "??".)
49
A Digit Recognizer Input: pixel grids Output: a digit 0-9
50
Naïve Bayes for Digits (Binary Inputs)
Simple version:
– One feature F_ij for each grid position
– Possible feature values are on / off, based on whether the intensity is more or less than 0.5 in the underlying image
– Each input maps to a feature vector (one binary value per pixel)
– Here: lots of features, each binary valued
Naïve Bayes model:
Are the features independent given the class? What do we need to learn?
51
Example Distributions (figure: for a few example pixels, tables of P(F_ij = on | Y = digit) listing one probability per digit 0–9).
52
What if we have continuous X_i? E.g., character recognition: X_i is the i-th pixel. Gaussian Naïve Bayes (GNB): model P(X_i | Y = y_k) as a Gaussian N(μ_ik, σ_ik). Sometimes assume the variance is independent of Y (i.e., σ_i), or independent of X_i (i.e., σ_k), or both (i.e., σ).
53
Estimating Parameters: Y discrete, X_i continuous. Maximum likelihood estimates for the mean and variance (see the sketch below), where the superscript j indexes training examples and δ(x) = 1 if x is true, else 0.
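The estimate formulas are images on the slide; the standard forms are μ_ik = Σ_j δ(Y^j = y_k) X_i^j / Σ_j δ(Y^j = y_k) and σ²_ik = Σ_j δ(Y^j = y_k)(X_i^j − μ_ik)² / Σ_j δ(Y^j = y_k). A minimal sketch (not from the slides) computing them, plus the corresponding prediction rule:

import numpy as np

def gnb_fit(X, y):
    """Per-class, per-feature Gaussian MLE. X: (n_examples, n_features) floats, y: class labels."""
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = {
            "prior": len(Xc) / len(X),       # P(Y = c)
            "mean": Xc.mean(axis=0),         # mu_{i,c} for every feature i
            "var": Xc.var(axis=0) + 1e-9,    # sigma^2_{i,c}, with a small floor for numerical stability
        }
    return params

def gnb_predict(params, x):
    """argmax_c  log P(c) + sum_i log N(x_i; mu_ic, sigma^2_ic)."""
    def log_posterior(p):
        return (np.log(p["prior"])
                - 0.5 * np.sum(np.log(2 * np.pi * p["var"]))
                - 0.5 * np.sum((x - p["mean"]) ** 2 / p["var"]))
    return max(params, key=lambda c: log_posterior(params[c]))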
54
Example: GNB for classifying mental states. fMRI: ~1 mm resolution, ~2 images per sec., 15,000 voxels/image; non-invasive, safe; measures the Blood Oxygen Level Dependent (BOLD) response (typical impulse response ~10 sec). [Mitchell et al.]
55
Brain scans can track activation with precision and sensitivity [Mitchell et al.]
56
Gaussian Naïve Bayes: learned μ_voxel,word and σ_voxel,word for P(BrainActivity | WordCategory = {People, Animal}). [Mitchell et al.]
57
Learned Bayes Models – means for P(BrainActivity | WordCategory): animal words vs. people words. Pairwise classification accuracy: 85%. [Mitchell et al.]
58
59
Bayes Classifier is Optimal!
Learn: h : X → Y
– X – features
– Y – target classes
Suppose you know the true P(Y|X):
– Bayes classifier: h_Bayes(x) = argmax_y P(Y = y | X = x)
Why?
60
Optimal classification. Theorem: the Bayes classifier h_Bayes is optimal – it achieves the minimum possible error (see the statement below). Why?
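The theorem statement on the slide is an image; the standard statement is:

error_{true}(h_{Bayes}) \le error_{true}(h) \quad \text{for any classifier } h, \quad \text{where } h_{Bayes}(x) = \arg\max_{y} P(Y = y \mid X = x)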
61
What you need to know about Naïve Bayes
Naïve Bayes classifier:
– What's the assumption
– Why we use it
– How we learn it
– Why Bayesian estimation is important
Text classification:
– Bag of words model
Gaussian NB:
– Features are still conditionally independent
– Each feature has a Gaussian distribution given the class
Optimal decision using the Bayes Classifier
62
Text Naïve Bayes Algorithm (Train)
Let V be the vocabulary of all words in the documents in D
For each category c_i ∈ C:
  Let D_i be the subset of documents in D in category c_i
  P(c_i) = |D_i| / |D|
  Let T_i be the concatenation of all the documents in D_i
  Let n_i be the total number of word occurrences in T_i
  For each word w_j ∈ V:
    Let n_ij be the number of occurrences of w_j in T_i
    Let P(w_j | c_i) = (n_ij + 1) / (n_i + |V|)
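A minimal sketch (not from the slides) of the training procedure above, assuming `docs` is a list of (list_of_words, category) pairs:

from collections import Counter, defaultdict

def train_nb(docs):
    """Return vocabulary V, priors P(c_i), and Laplace-smoothed conditionals P(w_j | c_i)."""
    vocab = {w for words, _ in docs for w in words}                 # V
    by_cat = defaultdict(list)
    for words, c in docs:
        by_cat[c].append(words)
    priors, cond = {}, {}
    for c, doc_list in by_cat.items():
        priors[c] = len(doc_list) / len(docs)                       # P(c_i) = |D_i| / |D|
        counts = Counter(w for words in doc_list for w in words)    # n_ij, counted over T_i
        n_i = sum(counts.values())                                  # total word occurrences in T_i
        cond[c] = {w: (counts[w] + 1) / (n_i + len(vocab)) for w in vocab}
    return vocab, priors, cond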
63
Text Naïve Bayes Algorithm (Test)
Given a test document X
Let n be the number of word occurrences in X
Return the category: argmax over c_i ∈ C of P(c_i) ∏_{i=1..n} P(a_i | c_i), where a_i is the word occurring in the i-th position of X
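A matching sketch of the test step (computed in log space, per the underflow discussion earlier); it pairs with the hypothetical train_nb sketch above, and simply skips words outside the training vocabulary:

import math

def classify_nb(vocab, priors, cond, words):
    """Return argmax_c  log P(c) + sum_i log P(a_i | c), over the document's words."""
    def log_score(c):
        s = math.log(priors[c])
        for w in words:
            if w in vocab:                       # words unseen in training are ignored in this sketch
                s += math.log(cond[c][w])
        return s
    return max(priors, key=log_score)

# Example with hand-specified, purely illustrative parameters:
V = {"free", "money", "meeting"}
priors = {"spam": 0.5, "ham": 0.5}
cond = {"spam": {"free": 0.5, "money": 0.4, "meeting": 0.1},
        "ham":  {"free": 0.1, "money": 0.2, "meeting": 0.7}}
print(classify_nb(V, priors, cond, ["free", "money", "now"]))        # 'spam'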
64
Naïve Bayes Time Complexity
Training time: O(|D| L_d + |C| |V|), where L_d is the average length of a document in D.
– Assumes V and all D_i, n_i, and n_ij are pre-computed in O(|D| L_d) time during one pass through all of the data.
– Generally just O(|D| L_d), since usually |C| |V| < |D| L_d.
Test time: O(|C| L_t), where L_t is the average length of a test document.
Very efficient overall; linearly proportional to the time needed to just read in all the data.
65
Multi-Class Categorization
– Pick the category with max probability
– Create many 1-vs-other classifiers
– Use a hierarchical approach (wherever a hierarchy is available)
(Example hierarchy: Entity → Person, Location; Person → Scientist, Artist; Location → City, County, Country.)