Copyright © Andrew W. Moore Slide 1 Probabilistic and Bayesian Analytics Andrew W. Moore Professor School of Computer Science Carnegie Mellon University Note to other teachers and users of these slides. Andrew would be delighted if you found this source material useful in giving your own lectures. Feel free to use these slides verbatim, or to modify them to fit your own needs. PowerPoint originals are available. If you make use of a significant portion of these slides in your own lecture, please include this message, or the following link to the source repository of Andrew’s tutorials: Comments and corrections gratefully received.

Slide 2 Copyright © Andrew W. Moore Probability The world is a very uncertain place 30 years of Artificial Intelligence and Database research danced around this fact And then a few AI researchers decided to use some ideas from the eighteenth century

Slide 3 Copyright © Andrew W. Moore What we’re going to do We will review the fundamentals of probability. It’s really going to be worth it In this lecture, you’ll see an example of probabilistic analytics in action: Bayes Classifiers

Slide 4 Copyright © Andrew W. Moore Discrete Random Variables A is a Boolean-valued random variable if A denotes an event, and there is some degree of uncertainty as to whether A occurs. Examples A = The US president in 2023 will be male A = You wake up tomorrow with a headache A = You have Ebola

Slide 5 Copyright © Andrew W. Moore Probabilities We write P(A) as “the fraction of possible worlds in which A is true” We could at this point spend 2 hours on the philosophy of this. But we won’t.

Slide 6 Copyright © Andrew W. Moore Visualizing A Event space of all possible worlds; its area is 1. Worlds in which A is true form a region (the reddish oval); the remaining worlds are those in which A is False. P(A) = Area of reddish oval

Slide 7 Copyright © Andrew W. Moore The Axioms of Probability

Slide 8 Copyright © Andrew W. Moore The Axioms of Probability 0 <= P(A) <= 1 P(True) = 1 P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) Where do these axioms come from? Were they “discovered”? Answers coming up later.
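The following minimal Python sketch (an addition to the transcript, not part of the original slides) treats a small finite set of equally likely worlds as the event space and checks the addition axiom numerically; the particular sets `worlds`, `A`, and `B` are arbitrary illustrative choices.

```python
# Minimal sketch: verify P(A or B) = P(A) + P(B) - P(A and B) on a toy event space
# of six equally likely worlds. Note: `|` and `&` below are Python set union and
# intersection ("A or B", "A and B"), not conditional probability.
worlds = {1, 2, 3, 4, 5, 6}          # event space; total probability is 1
A = {2, 4, 6}                        # an arbitrary event: "world is even"
B = {4, 5, 6}                        # another arbitrary event: "world is at least 4"

def P(event):
    """Fraction of possible worlds in which the event is true."""
    return len(event & worlds) / len(worlds)

assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12
print(P(A), P(B), P(A | B), P(A & B))   # 0.5 0.5 0.666... 0.333...
```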

Slide 9 Copyright © Andrew W. Moore Interpreting the axioms 0 <= P(A) <= 1 P(True) = 1 P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) The area of A can’t get any smaller than 0 And a zero area would mean no world could ever have A true

Slide 10 Copyright © Andrew W. Moore Interpreting the axioms 0 <= P(A) <= 1 P(True) = 1 P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) The area of A can’t get any bigger than 1 And an area of 1 would mean all worlds will have A true

Slide 11 Copyright © Andrew W. Moore Interpreting the axioms 0 <= P(A) <= 1 P(True) = 1 P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) A B

Slide 12 Copyright © Andrew W. Moore Interpreting the axioms 0 <= P(A) <= 1 P(True) = 1 P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) The area covered by A or B is the area of A plus the area of B, minus the double-counted overlap P(A and B): simple addition and subtraction

Slide 13 Copyright © Andrew W. Moore These Axioms are Not to be Trifled With There have been attempts to develop alternative methodologies for handling uncertainty: Fuzzy Logic, Three-valued logic, Dempster-Shafer, Non-monotonic reasoning. But the axioms of probability are the only system with this property: If you gamble using them you can't be unfairly exploited by an opponent using some other system [de Finetti 1931]

Slide 14 Copyright © Andrew W. Moore Theorems from the Axioms 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) From these we can prove: P(not A) = P(~A) = 1-P(A) How?

Slide 15 Copyright © Andrew W. Moore Side Note I am inflicting these proofs on you for two reasons: 1. These kinds of manipulations will need to be second nature to you if you use probabilistic analytics in depth 2. Suffering is good for you

Slide 16 Copyright © Andrew W. Moore Another important theorem 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) From these we can prove: P(A) = P(A ^ B) + P(A ^ ~B) How?

Slide 17 Copyright © Andrew W. Moore Multivalued Random Variables Suppose A can take on more than 2 values. A is a random variable with arity k if it can take on exactly one value out of {v1, v2, …, vk} Thus… P(A=vi ^ A=vj) = 0 if i ≠ j P(A=v1 or A=v2 or … or A=vk) = 1

Slide 18 Copyright © Andrew W. Moore An easy fact about Multivalued Random Variables: Using the axioms of probability… 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) And assuming that A obeys… P(A=vi ^ A=vj) = 0 if i ≠ j, P(A=v1 or A=v2 or … or A=vk) = 1 It’s easy to prove that P(A=v1 or A=v2 or … or A=vi) = P(A=v1) + P(A=v2) + … + P(A=vi)

Slide 19 Copyright © Andrew W. Moore An easy fact about Multivalued Random Variables: Using the axioms of probability… 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) And assuming that A obeys… P(A=vi ^ A=vj) = 0 if i ≠ j, P(A=v1 or A=v2 or … or A=vk) = 1 It’s easy to prove that P(A=v1 or A=v2 or … or A=vi) = P(A=v1) + P(A=v2) + … + P(A=vi) And thus we can prove P(A=v1) + P(A=v2) + … + P(A=vk) = 1

Slide 20 Copyright © Andrew W. Moore Another fact about Multivalued Random Variables: Using the axioms of probability… 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) And assuming that A obeys… P(A=vi ^ A=vj) = 0 if i ≠ j, P(A=v1 or A=v2 or … or A=vk) = 1 It’s easy to prove that P(B ^ [A=v1 or A=v2 or … or A=vi]) = P(B ^ A=v1) + P(B ^ A=v2) + … + P(B ^ A=vi)

Slide 21 Copyright © Andrew W. Moore Another fact about Multivalued Random Variables: Using the axioms of probability… 0 <= P(A) <= 1, P(True) = 1, P(False) = 0 P(A or B) = P(A) + P(B) - P(A and B) And assuming that A obeys… P(A=vi ^ A=vj) = 0 if i ≠ j, P(A=v1 or A=v2 or … or A=vk) = 1 It’s easy to prove that P(B ^ [A=v1 or A=v2 or … or A=vi]) = P(B ^ A=v1) + P(B ^ A=v2) + … + P(B ^ A=vi) And thus we can prove P(B) = P(B ^ A=v1) + P(B ^ A=v2) + … + P(B ^ A=vk)

Slide 22 Copyright © Andrew W. Moore Elementary Probability in Pictures P(~A) + P(A) = 1

Slide 23 Copyright © Andrew W. Moore Elementary Probability in Pictures P(B) = P(B ^ A) + P(B ^ ~A)

Slide 24 Copyright © Andrew W. Moore Elementary Probability in Pictures P(A=v1) + P(A=v2) + … + P(A=vk) = 1

Slide 25 Copyright © Andrew W. Moore Elementary Probability in Pictures P(B) = P(B ^ A=v1) + P(B ^ A=v2) + … + P(B ^ A=vk)

Slide 26 Copyright © Andrew W. Moore Conditional Probability P(A|B) = Fraction of worlds in which B is true that also have A true F H H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2 “Headaches are rare and flu is rarer, but if you’re coming down with ‘flu there’s a 50-50 chance you’ll have a headache.”

Slide 27 Copyright © Andrew W. Moore Conditional Probability F H H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2 P(H|F) = Fraction of flu-inflicted worlds in which you have a headache = (#worlds with flu and headache) / (#worlds with flu) = (Area of “H and F” region) / (Area of “F” region) = P(H ^ F) / P(F)

Slide 28 Copyright © Andrew W. Moore Definition of Conditional Probability P(A|B) = P(A ^ B) / P(B) Corollary: The Chain Rule P(A ^ B) = P(A|B) P(B)

Slide 29 Copyright © Andrew W. Moore Probabilistic Inference F H H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2 One day you wake up with a headache. You think: “Drat! 50% of flus are associated with headaches so I must have a 50-50 chance of coming down with flu” Is this reasoning good?

Slide 30 Copyright © Andrew W. Moore Probabilistic Inference F H H = “Have a headache” F = “Coming down with Flu” P(H) = 1/10 P(F) = 1/40 P(H|F) = 1/2 P(F ^ H) = P(H|F) P(F) = 1/2 × 1/40 = 1/80 P(F|H) = P(F ^ H) / P(H) = (1/80) / (1/10) = 1/8

Slide 31 Copyright © Andrew W. Moore Another way to understand the intuition Thanks to Jahanzeb Sherwani for contributing this explanation:

Slide 32 Copyright © Andrew W. Moore What we just did… P(B|A) = P(A ^ B) / P(A) = P(A|B) P(B) / P(A) This is Bayes Rule Bayes, Thomas (1763) An essay towards solving a problem in the doctrine of chances. Philosophical Transactions of the Royal Society of London, 53:
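As a quick illustration of Bayes Rule in code (an addition to the transcript, using only the headache/flu numbers given on the previous slides):

```python
# Sketch: apply Bayes Rule to the headache/flu numbers from the earlier slides.
P_H = 1 / 10         # P(Headache)
P_F = 1 / 40         # P(Flu)
P_H_given_F = 1 / 2  # P(Headache | Flu)

# Bayes Rule: P(F|H) = P(H|F) P(F) / P(H)
P_F_given_H = P_H_given_F * P_F / P_H
print(P_F_given_H)   # 0.125, i.e. 1/8 -- far from the naive "50-50" guess
```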

Slide 33 Copyright © Andrew W. Moore Using Bayes Rule to Gamble The “Win” envelope has a dollar and four beads in it $1.00 The “Lose” envelope has three beads and no money Trivial question: someone draws an envelope at random and offers to sell it to you. How much should you pay?

Slide 34 Copyright © Andrew W. Moore Using Bayes Rule to Gamble The “Win” envelope has a dollar and four beads in it $1.00 The “Lose” envelope has three beads and no money Interesting question: before deciding, you are allowed to see one bead drawn from the envelope. Suppose it’s black: How much should you pay? Suppose it’s red: How much should you pay?

Slide 35 Copyright © Andrew W. Moore Calculation… $1.00
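The transcript does not preserve the picture showing which colours of beads sit in each envelope, so the Python sketch below uses hypothetical bead counts purely to illustrate the shape of the calculation: a Bayes-Rule posterior on "Win" given the observed colour, times the $1.00 prize, gives the fair price.

```python
# Hedged sketch of the envelope calculation. The bead counts are HYPOTHETICAL
# placeholders -- substitute the real contents shown in the slide's picture.
win_beads  = {"black": 3, "red": 1}   # assumed: Win envelope holds $1 and four beads
lose_beads = {"black": 1, "red": 2}   # assumed: Lose envelope holds three beads, no money

def posterior_win(colour):
    """P(Win | observed bead colour), with P(Win) = P(Lose) = 1/2 a priori."""
    p_col_win  = win_beads[colour]  / sum(win_beads.values())
    p_col_lose = lose_beads[colour] / sum(lose_beads.values())
    return (p_col_win * 0.5) / (p_col_win * 0.5 + p_col_lose * 0.5)

for colour in ("black", "red"):
    p_win = posterior_win(colour)
    print(colour, "-> P(Win) =", round(p_win, 3),
          ", fair price = $", round(1.00 * p_win, 3))
```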

Slide 36 Copyright © Andrew W. Moore More General Forms of Bayes Rule P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|~A) P(~A)] P(A|B ^ X) = P(B|A ^ X) P(A|X) / P(B|X)

Slide 37 Copyright © Andrew W. Moore More General Forms of Bayes Rule For a multivalued A with arity nA: P(A=vi|B) = P(B|A=vi) P(A=vi) / [P(B|A=v1) P(A=v1) + … + P(B|A=vnA) P(A=vnA)]

Slide 38 Copyright © Andrew W. Moore Useful Easy-to-prove facts P(A|B) + P(~A|B) = 1 P(A=v1|B) + P(A=v2|B) + … + P(A=vk|B) = 1

Slide 39 Copyright © Andrew W. Moore The Joint Distribution Recipe for making a joint distribution of M variables: Example: Boolean variables A, B, C

Slide 40 Copyright © Andrew W. Moore The Joint Distribution Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). Example: Boolean variables A, B, C (truth table with columns A, B, C)

Slide 41 Copyright © Andrew W. Moore The Joint Distribution Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). 2. For each combination of values, say how probable it is. Example: Boolean variables A, B, C (table with columns A, B, C, Prob)

Slide 42 Copyright © Andrew W. Moore The Joint Distribution Recipe for making a joint distribution of M variables: 1. Make a truth table listing all combinations of values of your variables (if there are M Boolean variables then the table will have 2^M rows). 2. For each combination of values, say how probable it is. 3. If you subscribe to the axioms of probability, those numbers must sum to 1. Example: Boolean variables A, B, C (table with columns A, B, C, Prob)

Slide 43 Copyright © Andrew W. Moore Using the Joint Once you have the JD you can ask for the probability of any logical expression E involving your attributes: P(E) = sum of P(row) over all rows in which E is true

Slide 44 Copyright © Andrew W. Moore Using the Joint P(Poor Male) =

Slide 45 Copyright © Andrew W. Moore Using the Joint P(Poor) =

Slide 46 Copyright © Andrew W. Moore Inference with the Joint P(E1 | E2) = P(E1 ^ E2) / P(E2) = (sum of P(row) over rows matching E1 and E2) / (sum of P(row) over rows matching E2)

Slide 47 Copyright © Andrew W. Moore Inference with the Joint P(Male | Poor) = P(Male ^ Poor) / P(Poor) = 0.612
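Here is a small Python sketch (not from the slides) of inference with a joint: sum the probabilities of the rows matching both the query and the evidence, then divide by the mass of the rows matching the evidence. The tiny four-row joint is a hypothetical stand-in for the census table shown on the slide.

```python
# Sketch of inference with a joint distribution: P(E1 | E2) = P(E1 and E2) / P(E2),
# computed by summing the probabilities of the matching rows.
joint = [
    # (gender, wealth, probability) -- hypothetical numbers, summing to 1
    ("male",   "poor", 0.25),
    ("male",   "rich", 0.20),
    ("female", "poor", 0.35),
    ("female", "rich", 0.20),
]

def prob(condition):
    """Sum of probabilities of joint-table rows satisfying the condition."""
    return sum(p for gender, wealth, p in joint if condition(gender, wealth))

def conditional(e1, e2):
    """P(E1 | E2) = P(E1 and E2) / P(E2)."""
    return prob(lambda g, w: e1(g, w) and e2(g, w)) / prob(e2)

p = conditional(lambda g, w: g == "male", lambda g, w: w == "poor")
print(round(p, 3))   # P(Male | Poor) for the toy table: 0.25 / 0.60 = 0.417
```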

Slide 48 Copyright © Andrew W. Moore Inference is a big deal I’ve got this evidence. What’s the chance that this conclusion is true? I’ve got a sore neck: how likely am I to have meningitis? I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep?

Slide 49 Copyright © Andrew W. Moore Inference is a big deal I’ve got this evidence. What’s the chance that this conclusion is true? I’ve got a sore neck: how likely am I to have meningitis? I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep?

Slide 50 Copyright © Andrew W. Moore Inference is a big deal I’ve got this evidence. What’s the chance that this conclusion is true? I’ve got a sore neck: how likely am I to have meningitis? I see my lights are out and it’s 9pm. What’s the chance my spouse is already asleep? There’s a thriving set of industries growing based around Bayesian Inference. Highlights are: Medicine, Pharma, Help Desk Support, Engine Fault Diagnosis

Slide 51 Copyright © Andrew W. Moore Where do Joint Distributions come from? Idea One: Expert Humans Idea Two: Simpler probabilistic facts and some algebra Example: Suppose you knew P(A) = 0.7 P(B|A) = 0.2 P(B|~A) = 0.1 P(C|A^B) = 0.1 P(C|A^~B) = 0.8 P(C|~A^B) = 0.3 P(C|~A^~B) = 0.1 Then you can automatically compute the JD using the chain rule P(A=x ^ B=y ^ C=z) = P(C=z|A=x^ B=y) P(B=y|A=x) P(A=x) In another lecture: Bayes Nets, a systematic way to do this.
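The chain-rule construction on this slide can be written out directly in code; the sketch below (an addition to the transcript, using exactly the seven probabilities listed above) builds all eight rows of the joint and checks that they sum to 1.

```python
# Sketch: build the full joint distribution from the simpler facts on this slide
# using the chain rule P(A,B,C) = P(C|A,B) P(B|A) P(A).
from itertools import product

P_A = 0.7
P_B_given_A  = {True: 0.2, False: 0.1}                    # P(B | A), P(B | ~A)
P_C_given_AB = {(True, True): 0.1, (True, False): 0.8,
                (False, True): 0.3, (False, False): 0.1}  # P(C | A, B)

joint = {}
for a, b, c in product([True, False], repeat=3):
    p_a = P_A if a else 1 - P_A
    p_b = P_B_given_A[a] if b else 1 - P_B_given_A[a]
    p_c = P_C_given_AB[(a, b)] if c else 1 - P_C_given_AB[(a, b)]
    joint[(a, b, c)] = p_a * p_b * p_c

print(sum(joint.values()))   # sums to 1 (up to floating-point rounding)
```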

Slide 52 Copyright © Andrew W. Moore Where do Joint Distributions come from? Idea Three: Learn them from data! Prepare to see one of the most impressive learning algorithms you’ll come across in the entire course….

Slide 53 Copyright © Andrew W. Moore Learning a joint distribution Build a JD table for your attributes in which the probabilities are unspecified, then fill in each row with P̂(row) = (number of records matching that row) / (total number of records). Example for Boolean A, B, C: the table has rows 000, 001, 010, 011, 100, 101, 110, 111, each with an unknown probability “?” to be filled in; e.g. the 110 row gets the fraction of all records in which A and B are True but C is False
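A minimal sketch of this "trivial" learner in Python, assuming records are tuples of Boolean attribute values; the five-record dataset is made up for illustration.

```python
# Sketch of the "trivial" joint-distribution learner: the probability of each row
# is just the fraction of training records that match it.
from collections import Counter
from itertools import product

records = [(1, 1, 0), (1, 1, 0), (0, 1, 1), (1, 0, 0), (0, 0, 0)]  # hypothetical data

counts = Counter(records)
joint = {row: counts[row] / len(records)
         for row in product([0, 1], repeat=3)}   # 2^M rows, M = 3 here

print(joint[(1, 1, 0)])   # fraction of records with A=1, B=1, C=0 -> 0.4
```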

Slide 54 Copyright © Andrew W. Moore Example of Learning a Joint This Joint was obtained by learning from three attributes in the UCI “Adult” Census Database [Kohavi 1995]

Slide 55 Copyright © Andrew W. Moore Where are we? We have recalled the fundamentals of probability We have become content with what JDs are and how to use them And we even know how to learn JDs from data.

Slide 56 Copyright © Andrew W. Moore Density Estimation Our Joint Distribution learner is our first example of something called Density Estimation A Density Estimator learns a mapping from a set of attributes to a Probability (diagram: Input Attributes → Density Estimator → Probability)

Slide 57 Copyright © Andrew W. Moore Density Estimation Compare it against the two other major kinds of models: a Regressor maps Input Attributes to a prediction of real-valued output; a Density Estimator maps Input Attributes to a Probability; a Classifier maps Input Attributes to a prediction of categorical output

Slide 58 Copyright © Andrew W. Moore Evaluating Density Estimation Classifiers and Regressors are evaluated with a test-set criterion (e.g. test-set accuracy) for estimating performance on future data*; for a Density Estimator the analogous criterion is, so far, a “?” * See the Decision Tree or Cross Validation lecture for more detail

Slide 59 Copyright © Andrew W. Moore Evaluating a density estimator Given a record x, a density estimator M can tell you how likely the record is: P̂(x|M) Given a dataset with R records, a density estimator can tell you how likely the dataset is: P̂(dataset|M) = P̂(x1 ^ x2 ^ … ^ xR | M) = P̂(x1|M) P̂(x2|M) … P̂(xR|M) (Under the assumption that all records were independently generated from the Density Estimator’s JD)

Slide 60 Copyright © Andrew W. Moore A small dataset: Miles Per Gallon From the UCI repository (thanks to Ross Quinlan) 192 Training Set Records

Slide 61 Copyright © Andrew W. Moore A small dataset: Miles Per Gallon 192 Training Set Records

Slide 62 Copyright © Andrew W. Moore A small dataset: Miles Per Gallon 192 Training Set Records

Slide 63 Copyright © Andrew W. Moore Log Probabilities Since probabilities of datasets get so small we usually use log probabilities: log P̂(dataset|M) = log P̂(x1|M) + log P̂(x2|M) + … + log P̂(xR|M)
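A short sketch of scoring a dataset with log probabilities (an addition to the transcript); the `floor` guard against log(0) and the tiny two-attribute joint are hypothetical choices made for the example.

```python
# Sketch: score a dataset under a learned joint ("density estimator") using log
# probabilities, since the product of many small probabilities underflows.
import math

def log_likelihood(dataset, joint, floor=1e-20):
    """Sum of log P(record | M); `floor` is a hypothetical guard against log(0)."""
    return sum(math.log(max(joint.get(rec, 0.0), floor)) for rec in dataset)

# Tiny hypothetical joint over two Boolean attributes, and a dataset to score.
joint = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}
dataset = [(1, 1), (1, 0), (0, 1), (1, 1)]
print(log_likelihood(dataset, joint))   # about -4.65
```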

Slide 64 Copyright © Andrew W. Moore A small dataset: Miles Per Gallon 192 Training Set Records

Slide 65 Copyright © Andrew W. Moore Summary: The Good News We have a way to learn a Density Estimator from data. Density estimators can do many good things… Can sort the records by probability, and thus spot weird records (anomaly detection) Can do inference: P(E1|E2) Automatic Doctor / Help Desk etc Ingredient for Bayes Classifiers (see later)

Slide 66 Copyright © Andrew W. Moore Summary: The Bad News Density estimation by directly learning the joint is trivial, mindless and dangerous

Slide 67 Copyright © Andrew W. Moore Using a test set An independent test set with 196 cars has a worse log likelihood (actually it’s a billion quintillion quintillion quintillion quintillion times less likely) ….Density estimators can overfit. And the full joint density estimator is the overfittiest of them all!

Slide 68 Copyright © Andrew W. Moore Overfitting Density Estimators If this ever happens, it means there are certain combinations that we learn are impossible

Slide 69 Copyright © Andrew W. Moore Using a test set The only reason that our test set didn’t score -infinity is that my code is hard-wired never to predict a probability of zero: it always predicts at least some tiny minimum probability. We need Density Estimators that are less prone to overfitting

Slide 70 Copyright © Andrew W. Moore Naïve Density Estimation The problem with the Joint Estimator is that it just mirrors the training data. We need something which generalizes more usefully. The naïve model generalizes strongly: Assume that each attribute is distributed independently of any of the other attributes.

Slide 71 Copyright © Andrew W. Moore Independently Distributed Data Let x[i] denote the i’th field of record x. The independently distributed assumption says that for any i, v, u1, u2, …, u(i-1), u(i+1), …, uM: P(x[i]=v | x[1]=u1, …, x[i-1]=u(i-1), x[i+1]=u(i+1), …, x[M]=uM) = P(x[i]=v) Or in other words, x[i] is independent of {x[1], x[2], … x[i-1], x[i+1], … x[M]} This is often written as x[i] ⊥ {x[1], x[2], … x[i-1], x[i+1], … x[M]}

Slide 72 Copyright © Andrew W. Moore A note about independence Assume A and B are Boolean Random Variables. Then “A and B are independent” if and only if P(A|B) = P(A) “A and B are independent” is often notated as A ⊥ B

Slide 73 Copyright © Andrew W. Moore Independence Theorems Assume P(A|B) = P(A) Then P(A ^ B) = P(A|B) P(B) = P(A) P(B) Assume P(A|B) = P(A) Then P(B|A) = P(A ^ B) / P(A) = P(A) P(B) / P(A) = P(B)

Slide 74 Copyright © Andrew W. Moore Independence Theorems Assume P(A|B) = P(A) Then P(~A|B) = 1 - P(A|B) = 1 - P(A) = P(~A) Assume P(A|B) = P(A) Then P(A|~B) = P(A ^ ~B) / P(~B) = (P(A) - P(A) P(B)) / (1 - P(B)) = P(A)

Slide 75 Copyright © Andrew W. Moore Multivalued Independence For multivalued Random Variables A and B, A ⊥ B if and only if for all u, v: P(A=u|B=v) = P(A=u) from which you can then prove things like P(A=u ^ B=v) = P(A=u) P(B=v) and P(B=v|A=u) = P(B=v)

Slide 76 Copyright © Andrew W. Moore Back to Naïve Density Estimation Let x[i] denote the i’th field of record x: Naïve DE assumes x[i] is independent of {x[1],x[2],..x[i-1], x[i+1],…x[M]} Example: Suppose that each record is generated by randomly shaking a green dice and a red dice Dataset 1: A = red value, B = green value Dataset 2: A = red value, B = sum of values Dataset 3: A = sum of values, B = difference of values Which of these datasets violates the naïve assumption?
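One way to probe the question on this slide is to simulate the two dice and compare an estimated marginal P(A=a) with the conditional P(A=a | B=b) for each dataset; if the two numbers differ noticeably, the naïve assumption is violated. The sketch below is an addition to the transcript, and the particular probe values a=6, b=2 are arbitrary.

```python
# Sketch: simulate the three dice datasets and probe the naive assumption by
# comparing P(A = a) with P(A = a | B = b).
import random
random.seed(0)

def roll():
    return random.randint(1, 6)

def estimate(pairs, a_val, b_val):
    """Return (P_hat(A=a_val), P_hat(A=a_val | B=b_val)) from sampled (A, B) pairs."""
    p_a = sum(1 for a, b in pairs if a == a_val) / len(pairs)
    given_b = [a for a, b in pairs if b == b_val]
    p_a_given_b = sum(1 for a in given_b if a == a_val) / len(given_b)
    return p_a, p_a_given_b

n = 100_000
rolls = [(roll(), roll()) for _ in range(n)]
datasets = {
    "1: A=red, B=green":      [(r, g) for r, g in rolls],
    "2: A=red, B=sum":        [(r, r + g) for r, g in rolls],
    "3: A=sum, B=difference": [(r + g, r - g) for r, g in rolls],
}
for name, pairs in datasets.items():
    print(name, estimate(pairs, a_val=6, b_val=2))
    # If the two numbers differ noticeably, the naive assumption is violated.
```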

Slide 77 Copyright © Andrew W. Moore Using the Naïve Distribution Once you have a Naïve Distribution you can easily compute any row of the joint distribution. Suppose A, B, C and D are independently distributed. What is P(A^~B^C^~D)?

Slide 78 Copyright © Andrew W. Moore Using the Naïve Distribution Once you have a Naïve Distribution you can easily compute any row of the joint distribution. Suppose A, B, C and D are independently distributed. What is P(A^~B^C^~D)? = P(A|~B^C^~D) P(~B^C^~D) = P(A) P(~B^C^~D) = P(A) P(~B|C^~D) P(C^~D) = P(A) P(~B) P(C^~D) = P(A) P(~B) P(C|~D) P(~D) = P(A) P(~B) P(C) P(~D)

Slide 79 Copyright © Andrew W. Moore Naïve Distribution General Case Suppose x[1], x[2], … x[M] are independently distributed. Then P(x[1]=u1, x[2]=u2, …, x[M]=uM) = P(x[1]=u1) P(x[2]=u2) … P(x[M]=uM) So if we have a Naïve Distribution we can construct any row of the implied Joint Distribution on demand. So we can do any inference But how do we learn a Naïve Density Estimator?

Slide 80 Copyright © Andrew W. Moore Learning a Naïve Density Estimator P̂(x[i]=u) = (number of records in which x[i]=u) / (total number of records) Another trivial learning algorithm!
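A minimal Python sketch of the naïve density estimator (an addition to the transcript): count each attribute's values independently, then multiply the per-attribute estimates to get the probability of any row. The five-record dataset is made up for illustration.

```python
# Sketch of the naive density estimator: estimate each attribute's marginal
# distribution independently by counting, then multiply marginals for any row.
from collections import Counter

def fit_naive(records):
    """records: list of equal-length tuples. Returns per-attribute marginal estimates."""
    m = len(records[0])
    n = len(records)
    return [{v: c / n for v, c in Counter(rec[i] for rec in records).items()}
            for i in range(m)]

def naive_prob(marginals, row):
    """P_hat(row) = product over attributes of P_hat(x[i] = row[i])."""
    p = 1.0
    for marginal, value in zip(marginals, row):
        p *= marginal.get(value, 0.0)
    return p

records = [(1, 1, 0), (1, 1, 0), (0, 1, 1), (1, 0, 0), (0, 0, 0)]  # hypothetical data
marginals = fit_naive(records)
print(naive_prob(marginals, (1, 0, 1)))   # nonzero even though this row never occurs
```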

Slide 81 Copyright © Andrew W. Moore Contrast Joint DE: Can model anything. Naïve DE: Can model only very boring distributions. Joint DE: No problem to model “C is a noisy copy of A”. Naïve DE: Outside Naïve’s scope. Joint DE: Given 100 records and more than 6 Boolean attributes will screw up badly. Naïve DE: Given 100 records and 10,000 multivalued attributes will be fine.

Slide 82 Copyright © Andrew W. Moore Empirical Results: “Hopeless” The “hopeless” dataset consists of 40,000 records and 21 Boolean attributes called a, b, c, … u. Each attribute in each record is generated randomly as 0 or 1. Despite the vast amount of data, “Joint” overfits hopelessly and does much worse. (Chart: average test set log probability during 10 folds of k-fold cross-validation*) * Described in a future Andrew lecture

Slide 83 Copyright © Andrew W. Moore Empirical Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped The DE learned by “Joint” The DE learned by “Naive”

Slide 84 Copyright © Andrew W. Moore Empirical Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped The DE learned by “Joint” The DE learned by “Naive”

Slide 85 Copyright © Andrew W. Moore A tiny part of the DE learned by “Joint” Empirical Results: “MPG” The “MPG” dataset consists of 392 records and 8 attributes The DE learned by “Naive”

Slide 86 Copyright © Andrew W. Moore A tiny part of the DE learned by “Joint” Empirical Results: “MPG” The “MPG” dataset consists of 392 records and 8 attributes The DE learned by “Naive”

Slide 87 Copyright © Andrew W. Moore The DE learned by “Joint” Empirical Results: “Weight vs. MPG” Suppose we train only from the “Weight” and “MPG” attributes The DE learned by “Naive”

Slide 88 Copyright © Andrew W. Moore The DE learned by “Joint” Empirical Results: “Weight vs. MPG” Suppose we train only from the “Weight” and “MPG” attributes The DE learned by “Naive”

Slide 89 Copyright © Andrew W. Moore The DE learned by “Joint” “Weight vs. MPG”: The best that Naïve can do The DE learned by “Naive”

Slide 90 Copyright © Andrew W. Moore Reminder: The Good News We have two ways to learn a Density Estimator from data. *In other lectures we’ll see vastly more impressive Density Estimators (Mixture Models, Bayesian Networks, Density Trees, Kernel Densities and many more) Density estimators can do many good things… Anomaly detection Can do inference: P(E1|E2) Automatic Doctor / Help Desk etc Ingredient for Bayes Classifiers

Slide 91 Copyright © Andrew W. Moore Bayes Classifiers A formidable and sworn enemy of decision trees (diagram: Input Attributes → Classifier → Prediction of categorical output, with DT and BC as rival classifiers)

Slide 92 Copyright © Andrew W. Moore How to build a Bayes Classifier Assume you want to predict output Y which has arity nY and values v1, v2, … v(nY). Assume there are m input attributes called X1, X2, … Xm Break dataset into nY smaller datasets called DS1, DS2, … DS(nY). Define DSi = Records in which Y=vi For each DSi, learn Density Estimator Mi to model the input distribution among the Y=vi records.

Slide 93 Copyright © Andrew W. Moore How to build a Bayes Classifier Assume you want to predict output Y which has arity nY and values v1, v2, … v(nY). Assume there are m input attributes called X1, X2, … Xm Break dataset into nY smaller datasets called DS1, DS2, … DS(nY). Define DSi = Records in which Y=vi For each DSi, learn Density Estimator Mi to model the input distribution among the Y=vi records. Mi estimates P(X1, X2, … Xm | Y=vi)

Slide 94 Copyright © Andrew W. Moore How to build a Bayes Classifier Assume you want to predict output Y which has arity nY and values v1, v2, … v(nY). Assume there are m input attributes called X1, X2, … Xm Break dataset into nY smaller datasets called DS1, DS2, … DS(nY). Define DSi = Records in which Y=vi For each DSi, learn Density Estimator Mi to model the input distribution among the Y=vi records. Mi estimates P(X1, X2, … Xm | Y=vi) Idea: When a new set of input values (X1 = u1, X2 = u2, … Xm = um) come along to be evaluated predict the value of Y that makes P(X1, X2, … Xm | Y=vi) most likely Is this a good idea?

Slide 95 Copyright © Andrew W. Moore How to build a Bayes Classifier Assume you want to predict output Y which has arity nY and values v1, v2, … v(nY). Assume there are m input attributes called X1, X2, … Xm Break dataset into nY smaller datasets called DS1, DS2, … DS(nY). Define DSi = Records in which Y=vi For each DSi, learn Density Estimator Mi to model the input distribution among the Y=vi records. Mi estimates P(X1, X2, … Xm | Y=vi) Idea: When a new set of input values (X1 = u1, X2 = u2, … Xm = um) come along to be evaluated predict the value of Y that makes P(X1, X2, … Xm | Y=vi) most likely Is this a good idea? This is a Maximum Likelihood classifier. It can get silly if some Ys are very unlikely

Slide 96 Copyright © Andrew W. Moore How to build a Bayes Classifier Assume you want to predict output Y which has arity nY and values v1, v2, … v(nY). Assume there are m input attributes called X1, X2, … Xm Break dataset into nY smaller datasets called DS1, DS2, … DS(nY). Define DSi = Records in which Y=vi For each DSi, learn Density Estimator Mi to model the input distribution among the Y=vi records. Mi estimates P(X1, X2, … Xm | Y=vi) Idea: When a new set of input values (X1 = u1, X2 = u2, … Xm = um) come along to be evaluated predict the value of Y that makes P(Y=vi | X1, X2, … Xm) most likely Is this a good idea? Much Better Idea

Slide 97 Copyright © Andrew W. Moore Terminology MLE (Maximum Likelihood Estimator): Y mle = the value vi that maximizes P(X1=u1, … Xm=um | Y=vi) MAP (Maximum A-Posteriori Estimator): Y map = the value vi that maximizes P(Y=vi | X1=u1, … Xm=um)

Slide 98 Copyright © Andrew W. Moore Getting what we need By Bayes Rule, P(Y=vi | X1=u1, … Xm=um) = P(X1=u1, … Xm=um | Y=vi) P(Y=vi) / P(X1=u1, … Xm=um)

Slide 99 Copyright © Andrew W. Moore Getting a posterior probability P(Y=vi | X1=u1, … Xm=um) = P(X1=u1, … Xm=um | Y=vi) P(Y=vi) / [sum over j of P(X1=u1, … Xm=um | Y=vj) P(Y=vj)]

Slide 100 Copyright © Andrew W. Moore Bayes Classifiers in a nutshell 1. Learn the distribution over inputs for each value of Y. 2. This gives P(X1, X2, … Xm | Y=vi). 3. Estimate P(Y=vi) as the fraction of records with Y=vi. 4. For a new prediction: Y predict = the value v that maximizes P(X1=u1, … Xm=um | Y=v) P(Y=v)

Slide 101 Copyright © Andrew W. Moore Bayes Classifiers in a nutshell 1. Learn the distribution over inputs for each value of Y. 2. This gives P(X1, X2, … Xm | Y=vi). 3. Estimate P(Y=vi) as the fraction of records with Y=vi. 4. For a new prediction: Y predict = the value v that maximizes P(X1=u1, … Xm=um | Y=v) P(Y=v) We can use our favorite Density Estimator here. Right now we have two options: Joint Density Estimator Naïve Density Estimator

Slide 102 Copyright © Andrew W. Moore Joint Density Bayes Classifier In the case of the joint Bayes Classifier this degenerates to a very simple rule: Y predict = the most common value of Y among records in which X1 = u1, X2 = u2, … Xm = um. Note that if no records have the exact set of inputs X1 = u1, X2 = u2, … Xm = um, then P(X1, X2, … Xm | Y=vi) = 0 for all values of Y. In that case we just have to guess Y’s value

Slide 103 Copyright © Andrew W. Moore Joint BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped The Classifier learned by “Joint BC”

Slide 104 Copyright © Andrew W. Moore Joint BC Results: “All Irrelevant” The “all irrelevant” dataset consists of 40,000 records and 15 Boolean attributes called a,b,c,d..o where a,b,c are generated randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with prob 0.25

Slide 105 Copyright © Andrew W. Moore Naïve Bayes Classifier In the case of the naive Bayes Classifier this can be simplified: Y predict = the value v that maximizes P(Y=v) × P(X1=u1 | Y=v) × P(X2=u2 | Y=v) × … × P(Xm=um | Y=v)

Slide 106 Copyright © Andrew W. Moore Naïve Bayes Classifier In the case of the naive Bayes Classifier this can be simplified: Y predict = the value v that maximizes P(Y=v) × P(X1=u1 | Y=v) × P(X2=u2 | Y=v) × … × P(Xm=um | Y=v) Technical Hint: If you have 10,000 input attributes that product will underflow in floating point math. You should use logs: Y predict = the value v that maximizes log P(Y=v) + log P(X1=u1 | Y=v) + … + log P(Xm=um | Y=v)
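Putting the last few slides together, here is a hedged Python sketch of a naïve Bayes classifier that trains by counting and predicts with log probabilities as the Technical Hint suggests. It is a bare-bones illustration, not Andrew's code: it uses no smoothing, and the tiny `floor` guard for unseen attribute values and the toy dataset are assumptions made for the example.

```python
# Sketch of a naive Bayes classifier that predicts with log probabilities to avoid
# floating-point underflow when there are many attributes.
import math
from collections import Counter, defaultdict

def fit_naive_bc(X, y):
    """X: list of attribute tuples, y: list of class labels. Returns priors and
    per-class, per-attribute marginal estimates (no smoothing in this sketch)."""
    n = len(y)
    priors = {v: c / n for v, c in Counter(y).items()}
    marginals = defaultdict(list)
    for v in priors:
        rows = [x for x, label in zip(X, y) if label == v]
        for i in range(len(X[0])):
            counts = Counter(r[i] for r in rows)
            marginals[v].append({u: c / len(rows) for u, c in counts.items()})
    return priors, marginals

def predict(priors, marginals, x, floor=1e-20):
    """argmax over v of [ log P(Y=v) + sum over j of log P(Xj = xj | Y=v) ]."""
    def score(v):
        return math.log(priors[v]) + sum(
            math.log(max(marginals[v][j].get(u, 0.0), floor))
            for j, u in enumerate(x))
    return max(priors, key=score)

# Hypothetical toy data: two Boolean inputs, output mostly follows the first input.
X = [(1, 0), (1, 1), (0, 0), (0, 1), (1, 0), (0, 0)]
y = [1, 1, 0, 0, 1, 0]
priors, marginals = fit_naive_bc(X, y)
print(predict(priors, marginals, (1, 1)))   # -> 1
```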

Slide 107 Copyright © Andrew W. Moore BC Results: “XOR” The “XOR” dataset consists of 40,000 records and 2 Boolean inputs called a and b, generated randomly as 0 or 1. c (output) = a XOR b The Classifier learned by “Naive BC” The Classifier learned by “Joint BC”

Slide 108 Copyright © Andrew W. Moore Naive BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped The Classifier learned by “Naive BC”

Slide 109 Copyright © Andrew W. Moore Naive BC Results: “Logical” The “logical” dataset consists of 40,000 records and 4 Boolean attributes called a,b,c,d where a,b,c are generated randomly as 0 or 1. D = A^~C, except that in 10% of records it is flipped The Classifier learned by “Joint BC” This result surprised Andrew until he had thought about it a little

Slide 110 Copyright © Andrew W. Moore Naïve BC Results: “All Irrelevant” The “all irrelevant” dataset consists of 40,000 records and 15 Boolean attributes called a,b,c,d..o where a,b,c are generated randomly as 0 or 1. v (output) = 1 with probability 0.75, 0 with prob 0.25 The Classifier learned by “Naive BC”

Slide 111 Copyright © Andrew W. Moore BC Results: “MPG”: 392 records The Classifier learned by “Naive BC”

Slide 112 Copyright © Andrew W. Moore BC Results: “MPG”: 40 records

Slide 113 Copyright © Andrew W. Moore More Facts About Bayes Classifiers Many other density estimators can be slotted in*. Density estimation can be performed with real-valued inputs* Bayes Classifiers can be built with real-valued inputs* Rather Technical Complaint: Bayes Classifiers don’t try to be maximally discriminative---they merely try to honestly model what’s going on* Zero probabilities are painful for Joint and Naïve. A hack (justifiable with the magic words “Dirichlet Prior”) can help*. Naïve Bayes is wonderfully cheap. And survives 10,000 attributes cheerfully! *See future Andrew Lectures
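The "Dirichlet Prior" hack is left to a later lecture; one common concrete instance is add-one (Laplace) smoothing of the counts, sketched below as an assumption rather than as the exact fix the slide has in mind.

```python
# Hedged sketch: add-one (Laplace) smoothing for a conditional estimate, one common
# instance of the Dirichlet-prior fix for zero probabilities in Joint and Naive BCs.
def smoothed_estimate(count_value, count_total, num_values):
    """P_hat(X = u | Y = v) with a pseudo-count of 1 for every possible value of X."""
    return (count_value + 1) / (count_total + num_values)

print(smoothed_estimate(0, 50, 2))   # a never-seen value gets a small nonzero probability
```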

Slide 114 Copyright © Andrew W. Moore What you should know Probability Fundamentals of Probability and Bayes Rule What’s a Joint Distribution How to do inference (i.e. P(E1|E2)) once you have a JD Density Estimation What is DE and what is it good for How to learn a Joint DE How to learn a naïve DE

Slide 115 Copyright © Andrew W. Moore What you should know Bayes Classifiers How to build one How to predict with a BC Contrast between naïve and joint BCs

Slide 116 Copyright © Andrew W. Moore Interesting Questions Suppose you were evaluating NaiveBC, JointBC, and Decision Trees Invent a problem where only NaiveBC would do well Invent a problem where only Dtree would do well Invent a problem where only JointBC would do well Invent a problem where only NaiveBC would do poorly Invent a problem where only Dtree would do poorly Invent a problem where only JointBC would do poorly