Bayesian Learning and Learning Bayesian Networks.

Overview
- Full Bayesian Learning
- MAP learning
- Maximum Likelihood Learning
- Learning Bayesian Networks
  - Fully observable
  - With hidden (unobservable) variables

Full Bayesian Learning
- In the learning methods we have seen so far, the idea was always to find the single best model that could explain some observations.
- In contrast, full Bayesian learning sees learning as Bayesian updating of a probability distribution over the hypothesis space, given the data:
  - H is the hypothesis variable; its possible values (hypotheses) are h_1, ..., h_n
  - P(H) is the prior probability distribution over the hypothesis space
  - the j-th observation d_j gives the outcome of the random variable D_j; the training data is d = d_1, ..., d_k

Full Bayesian Learning (cont'd)
- Given the data so far, each hypothesis h_i has a posterior probability:
  P(h_i|d) = α P(d|h_i) P(h_i)   (Bayes' theorem)
  where P(d|h_i) is called the likelihood of the data under hypothesis h_i
- Predictions about a new entity X are a weighted average over the predictions of each hypothesis:
  P(X|d) = Σ_i P(X, h_i|d) = Σ_i P(X|h_i, d) P(h_i|d) = Σ_i P(X|h_i) P(h_i|d) ∝ Σ_i P(X|h_i) P(d|h_i) P(h_i)
  The weights are given by the data likelihood and the prior of each hypothesis
  (the data adds nothing to a prediction once a hypothesis is given, so P(X|h_i, d) = P(X|h_i))
- No need to pick one best-guess hypothesis!

Example
- Suppose we have 5 types of candy bags:
  - 10% are 100% cherry candies (h_100, P(h_100) = 0.1)
  - 20% are 75% cherry + 25% lime candies (h_75, P(h_75) = 0.2)
  - 40% are 50% cherry + 50% lime candies (h_50, P(h_50) = 0.4)
  - 20% are 25% cherry + 75% lime candies (h_25, P(h_25) = 0.2)
  - 10% are 100% lime candies (h_0, P(h_0) = 0.1)
- Then we observe candies drawn from some bag
- Let's call θ the parameter that defines the fraction of cherry candies in a bag, and h_θ the corresponding hypothesis
- Which of the five kinds of bag has generated my 10 observations? P(h_θ|d)
- What flavour will the next candy be? Prediction P(X|d)

Example (cont'd)
- If we re-wrap each candy and return it to the bag, our 10 observations are independent and identically distributed (i.i.d.), so
  P(d|h_θ) = Π_j P(d_j|h_θ)   for j = 1, ..., 10
- For a given h_θ, the value of P(d_j|h_θ) is
  P(d_j = cherry|h_θ) = θ;   P(d_j = lime|h_θ) = (1 - θ)
- Given N observations, of which c are cherry and ℓ = N - c are lime:
  P(d|h_θ) = θ^c (1 - θ)^ℓ
  (the binomial setting: the number of successes in a sequence of N independent trials with binary outcome, each of which yields success with probability θ)
- For instance, after observing 3 lime candies in a row:
  P([lime, lime, lime]|h_50) = 0.5^3 = 0.125, because the probability of seeing lime at each observation is 0.5 under this hypothesis
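A minimal Python sketch (not part of the slides; the bag fractions and priors are the ones listed above) of computing the likelihood P(d|h_θ) = θ^c (1 - θ)^ℓ of a sequence of observations under each bag hypothesis:

    # Candy-bag hypotheses: theta = fraction of cherry candies, mapped to the prior P(h_theta)
    priors = {1.00: 0.1, 0.75: 0.2, 0.50: 0.4, 0.25: 0.2, 0.00: 0.1}

    def likelihood(observations, theta):
        """P(d | h_theta) for i.i.d. draws; each observation is 'cherry' or 'lime'."""
        c = observations.count("cherry")
        l = observations.count("lime")
        return theta ** c * (1 - theta) ** l

    data = ["lime", "lime", "lime"]
    for theta in priors:
        print(theta, likelihood(data, theta))   # h_50 gives 0.5**3 = 0.125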

Posterior Probability of H
- P(h_i|d) = α P(d|h_i) P(h_i)
- Initially, the hypotheses with higher priors dominate (h_50, with prior 0.4)
- As data come in, the true hypothesis (h_0) starts dominating, as the probability of seeing this data under the other hypotheses gets increasingly smaller
  - after seeing three lime candies in a row, the probability that the bag is the all-lime one starts taking off
- [Plot: posterior probabilities P(h_100|d), P(h_75|d), P(h_50|d), P(h_25|d), P(h_0|d) against the number of observations]

Prediction Probability
- The probability that the next candy is lime increases with the probability that the bag is an all-lime one:
  P(next candy is lime|d) = Σ_i P(next candy is lime|h_i) P(h_i|d)
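Continuing the sketch above (it reuses priors and likelihood() defined there), a sketch of the Bayesian update and the weighted-average prediction as the 10 lime candies are unwrapped one at a time:

    def posteriors(observations, priors):
        """P(h_theta | d) = alpha * P(d | h_theta) * P(h_theta)."""
        unnorm = {theta: likelihood(observations, theta) * prior
                  for theta, prior in priors.items()}
        alpha = 1.0 / sum(unnorm.values())
        return {theta: alpha * p for theta, p in unnorm.items()}

    def predict_lime(observations, priors):
        """P(next candy is lime | d) = sum_i P(lime | h_i) * P(h_i | d)."""
        return sum((1 - theta) * p
                   for theta, p in posteriors(observations, priors).items())

    data = []
    for n in range(1, 11):                      # unwrap 10 lime candies, one at a time
        data.append("lime")
        post = posteriors(data, priors)
        print(n, round(post[0.00], 3), round(predict_lime(data, priors), 3))
    # P(h_0 | d) climbs towards 1, and so does the probability that the next candy is lime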

Overview
- Full Bayesian Learning
- MAP learning
- Maximum Likelihood Learning
- Learning Bayesian Networks
  - Fully observable
  - With hidden (unobservable) variables

MAP approximation
- Full Bayesian learning seems like a very safe bet, but unfortunately it does not work well in practice:
  - summing over the hypothesis space is often intractable (e.g., there are 2^(2^6) = 18,446,744,073,709,551,616 Boolean functions of 6 attributes)
- Very common approximation: Maximum A Posteriori (MAP) learning. Instead of doing prediction by considering all possible hypotheses, as in
  P(X|d) = Σ_i P(X|h_i) P(h_i|d),
  make predictions based on the h_MAP that maximises P(h_i|d), i.e. that maximises P(d|h_i) P(h_i):
  P(X|d) ≈ P(X|h_MAP)

MAP approximation (cont'd)
- MAP is a good approximation when P(X|d) ≈ P(X|h_MAP)
  - in our example, h_MAP is the all-lime bag after only 3 candies, so it predicts that the next candy will be lime with probability 1
  - the full Bayesian learner gave a prediction of 0.8, which is safer after seeing only 3 candies
- [Plot: posterior probabilities P(h_100|d), P(h_75|d), P(h_50|d), P(h_25|d), P(h_0|d), as on the previous plot]
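Comparing the two predictions after three lime candies, reusing posteriors() and predict_lime() from the earlier sketch:

    data = ["lime", "lime", "lime"]
    post = posteriors(data, priors)

    # MAP: commit to the single hypothesis with the highest posterior
    theta_map = max(post, key=post.get)           # 0.0, i.e. the all-lime bag
    print(1 - theta_map)                          # predicts lime with probability 1.0

    # Full Bayesian: weighted average over all hypotheses
    print(predict_lime(data, priors))             # about 0.8, a more cautious prediction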

Bias
- As more data arrive, the MAP and Bayesian predictions become closer, as MAP's competing hypotheses become less and less likely
- It is often easier to find h_MAP (an optimization problem) than to deal with a large summation
- P(H) plays an important role in both MAP and full Bayesian learning:
  - it defines the learning bias, i.e. which hypotheses are favoured
  - it can be used to define a tradeoff between model complexity and ability to fit the data:
    - more complex models can explain the data better => higher P(d|h_i), but danger of overfitting
    - yet they are less likely a priori, because there are more of them than simple models => lower P(h_i)
  - i.e., a common learning bias is to penalize complexity

Overview
- Full Bayesian Learning
- MAP learning
- Maximum Likelihood Learning
- Learning Bayesian Networks
  - Fully observable
  - With hidden (unobservable) variables

Maximum Likelihood (ML) Learning
- A further simplification over full Bayesian and MAP learning:
  - assume a uniform prior over the space of hypotheses
  - MAP learning (maximize P(d|h_i) P(h_i)) then reduces to maximizing P(d|h_i)
- When is ML appropriate?
  - it is the standard (non-Bayesian) statistical learning method, used by those who distrust the subjective nature of hypothesis priors
  - when the competing hypotheses are indeed equally likely a priori (e.g., they have the same complexity)
  - with very large datasets, for which P(d|h_i) tends to overcome the influence of P(h_i)

Overview
- Full Bayesian Learning
- MAP learning
- Maximum Likelihood Learning
- Learning Bayesian Networks
  - Fully observable (complete data)
  - With hidden (unobservable) variables

Learning BNets: Complete Data
- We will start by applying ML to the simplest type of Bayesian network learning:
  - known structure
  - data containing observations for all variables: all variables are observable, no missing data
- The only things we need to learn are the network's parameters

ML learning: example
- Back to the candy example: a new candy manufacturer does not provide information on the proportions of the different types of bags, i.e. on the fraction θ of cherry candies
  - any θ is possible: a continuum of hypotheses h_θ
  - it is reasonable to assume that all θ are equally likely (we have no evidence to the contrary): uniform distribution P(h_θ)
  - θ is the parameter of this simple family of models, and it is what we need to learn
- A simple network represents this problem:
  - Flavor represents the event of drawing a cherry vs. lime candy from the bag
  - P(F = cherry), or P(cherry) for brevity, is equivalent to the fraction θ of cherry candies in the bag
- We want to infer θ by unwrapping N candies from the bag

ML learning: example (cont'd)
- Unwrap N candies, c cherries and ℓ = N - c limes (returning each candy to the bag after observing its flavor)
- As we saw earlier, this is described by a binomial setting:
  P(d|h_θ) = Π_j P(d_j|h_θ) = θ^c (1 - θ)^ℓ
- With ML we want to find the θ that maximizes this expression, or equivalently its log likelihood L:
  L(d|h_θ) = log Π_j P(d_j|h_θ) = log(θ^c (1 - θ)^ℓ) = c log θ + ℓ log(1 - θ)

ML learning: example (cont'd)
- To maximise, we differentiate L with respect to θ and set the result to 0:
  dL/dθ = c/θ - ℓ/(1 - θ) = 0
- Doing the math gives
  θ = c/(c + ℓ) = c/N
  i.e. the proportion of cherries in the bag is equal to the proportion (frequency) of cherries in the data
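A quick numeric sanity check (a sketch with made-up counts, not part of the lecture) that the log likelihood indeed peaks at the observed frequency c/N:

    import numpy as np

    c, l = 7, 3                                   # made-up counts: 7 cherries, 3 limes, N = 10
    thetas = np.linspace(0.001, 0.999, 999)       # avoid log(0) at the endpoints
    log_lik = c * np.log(thetas) + l * np.log(1 - thetas)
    print(thetas[np.argmax(log_lik)])             # ~0.7, i.e. c / (c + l)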

Frequencies as Priors
- So this says that the proportion of cherries in the bag is equal to the proportion (frequency) of cherries in the data
- We have already used frequencies to learn the probabilities of the PoS-tagger HMM in the homework
- Now we have a justification for why this approach provides a reasonable estimate of node priors

General ML procedure
- Express the likelihood of the data as a function of the parameters to be learned
- Take the derivative of the log likelihood with respect to each parameter
- Find the parameter value that makes the derivative equal to 0
- The last step can be computationally very expensive in real-world learning tasks

Another example
- The manufacturer chooses the color of the wrapper probabilistically for each candy, based on its flavor, following an unknown distribution:
  - if the flavour is cherry, it chooses a red wrapper with probability θ_1
  - if the flavour is lime, it chooses a red wrapper with probability θ_2
- The Bayesian network for this problem includes 3 parameters to be learned: θ, θ_1, θ_2

Another example (cont'd)
- P(W = green, F = cherry | h_θ,θ1,θ2)   (*)
  = P(W = green | F = cherry, h_θ,θ1,θ2) P(F = cherry | h_θ,θ1,θ2) = θ (1 - θ_1)
- We unwrap N candies:
  - c are cherry and ℓ are lime
  - r_c cherry with red wrapper, g_c cherry with green wrapper
  - r_ℓ lime with red wrapper, g_ℓ lime with green wrapper
- Every trial is a combination of wrapper and candy flavor, similar to event (*) above, so
  P(d | h_θ,θ1,θ2) = Π_j P(d_j | h_θ,θ1,θ2) = θ^c (1 - θ)^ℓ · θ_1^(r_c) (1 - θ_1)^(g_c) · θ_2^(r_ℓ) (1 - θ_2)^(g_ℓ)

Another example (cont'd)
- We want to maximize the log of this expression:
  c log θ + ℓ log(1 - θ) + r_c log θ_1 + g_c log(1 - θ_1) + r_ℓ log θ_2 + g_ℓ log(1 - θ_2)
- Take the derivative with respect to each of θ, θ_1, θ_2:
  - the terms not containing the differentiation variable disappear, so each parameter is maximised separately, giving
    θ = c/(c + ℓ),   θ_1 = r_c/(r_c + g_c),   θ_2 = r_ℓ/(r_ℓ + g_ℓ)

ML parameter learning in Bayes nets
- Frequencies again!
- This process generalizes to every fully observable Bayesian network.
- With complete data and the ML approach:
  - parameter learning decomposes into a separate learning problem for each parameter (CPT entry), because of the log-likelihood step
  - each parameter is given by the frequency of the desired child value given the relevant parent values
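A sketch of the resulting count-based (frequency) estimates for the wrapper network, on a hypothetical list of (flavor, wrapper) observations:

    # Hypothetical observations: (flavor, wrapper) pairs
    data = [("cherry", "red"), ("cherry", "green"), ("cherry", "red"),
            ("lime", "green"), ("lime", "red"), ("lime", "green")]

    N   = len(data)
    c   = sum(1 for f, w in data if f == "cherry")
    l   = N - c
    r_c = sum(1 for f, w in data if f == "cherry" and w == "red")
    r_l = sum(1 for f, w in data if f == "lime" and w == "red")

    theta   = c / N        # P(F = cherry)
    theta_1 = r_c / c      # P(W = red | F = cherry)
    theta_2 = r_l / l      # P(W = red | F = lime)
    print(theta, theta_1, theta_2)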

Very Popular Application
- Naïve Bayes models: very simple Bayesian networks for classification
  - the class variable (to be predicted) is the root node
  - the attribute variables X_i (the observations) are the leaves
- "Naïve" because it assumes that the attributes are conditionally independent of each other given the class
- A deterministic prediction can be obtained by picking the most likely class
- Scales up really well: with n boolean attributes we just need 2n + 1 parameters
- [Network: class node C with the attribute nodes X_1, X_2, ..., X_i as its children]
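A minimal naïve Bayes sketch (hypothetical boolean attributes and class labels, not the newsgroup data of the next slide), with ML frequency estimates and prediction by the most likely class:

    from collections import Counter, defaultdict

    def train(examples):
        """examples: list of (attribute_dict, class_label) pairs. ML = relative frequencies."""
        class_counts = Counter(label for _, label in examples)
        attr_counts = defaultdict(Counter)              # (attribute, class) -> Counter of values
        for attrs, label in examples:
            for a, v in attrs.items():
                attr_counts[(a, label)][v] += 1
        priors = {c: n / len(examples) for c, n in class_counts.items()}
        def likelihood(a, v, c):                        # P(X_a = v | C = c)
            return attr_counts[(a, c)][v] / class_counts[c]
        return priors, likelihood

    def predict(attrs, priors, likelihood):
        """Pick the class maximising P(C) * prod_i P(X_i | C)."""
        scores = {c: p for c, p in priors.items()}
        for c in scores:
            for a, v in attrs.items():
                scores[c] *= likelihood(a, v, c)
        return max(scores, key=scores.get)

    examples = [({"x1": 1, "x2": 0}, "A"), ({"x1": 1, "x2": 1}, "A"),
                ({"x1": 0, "x2": 1}, "B"), ({"x1": 0, "x2": 0}, "B")]
    priors, lik = train(examples)
    print(predict({"x1": 1, "x2": 1}, priors, lik))     # -> "A"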

Example
- Naïve Bayes classifier for the newsgroup reading example

Problem with ML parameter learning
- With small datasets, some of the frequencies may be 0 just because we have not observed the relevant data
- This generates very strong but incorrect predictions (a probability of 0 can never be overridden by the other factors)
- Common fix: initialize the count of every relevant event to 1 before counting the observations
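A minimal sketch of this fix (add-1, or Laplace, smoothing), assuming an attribute that can take n_values possible values:

    def estimate(count_value, count_class, n_values, pseudo=1):
        """P(value | class) with `pseudo` added to every count before normalising."""
        return (count_value + pseudo) / (count_class + pseudo * n_values)

    print(estimate(0, 2, n_values=2))   # 0.25 instead of 0: no more hard zero predictions
    print(estimate(2, 2, n_values=2))   # 0.75 instead of 1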

Probability from Experts
- As we mentioned in previous lectures, an alternative to learning probabilities from data is to get them from experts
- Problems:
  - experts may be reluctant to commit to specific probabilities that cannot later be refined
  - how to represent the confidence in a given estimate
  - getting the experts and their time in the first place
- One promising approach is to leverage both sources when they are available:
  - get initial estimates from the experts
  - refine them with data

Combining Experts and Data
- Get the expert to express her belief on event A as the pair <n, m>, i.e. how many observations of A she has seen (or expects to see) in m trials
- Combine the pair with actual data:
  - if A is observed, increment both n and m
  - if ¬A is observed, increment m alone
- The absolute values in the pair can be used to express the expert's level of confidence in her estimate:
  - small values represent low confidence, as they are quickly dominated by the data
  - the larger the values, the higher the confidence, as it takes more and more data to overcome the initial estimate
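A sketch of this scheme (the expert pairs <n, m> and the observation sequences below are hypothetical):

    def combine(expert_n, expert_m, observations):
        """observations: booleans, True when A is observed. Returns the running estimate of P(A)."""
        n, m = expert_n, expert_m
        for saw_A in observations:
            m += 1
            if saw_A:
                n += 1
        return n / m

    # A confident expert (<80, 100>) is barely moved by 10 contrary observations...
    print(combine(80, 100, [False] * 10))   # ~0.73
    # ...while a low-confidence expert (<4, 5>) is quickly dominated by the same data.
    print(combine(4, 5, [False] * 10))      # ~0.27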