1
Bayesian Learning
CS 9633 Machine Learning
Computer Science Department
2
Bayesian Learning
- Probabilistic approach to inference
- Assumptions:
  - Quantities of interest are governed by probability distributions
  - Optimal decisions can be made by reasoning about these probabilities and the observations
- Provides a quantitative approach to weighing how evidence supports alternative hypotheses
3
Why is Bayesian Learning Important?
- Some Bayesian approaches (like naive Bayes) are very practical learning approaches and are competitive with other methods
- Bayesian reasoning provides a useful perspective for understanding many learning algorithms that do not explicitly manipulate probabilities
4
Important Features
- The model is incrementally updated with each training example
- Prior knowledge can be combined with observed data to determine the final probability of a hypothesis, by
  - asserting a prior probability for each candidate hypothesis, and
  - asserting a probability distribution over the observations for each hypothesis
- Can accommodate hypotheses that make probabilistic predictions
- New instances can be classified by combining the predictions of multiple hypotheses
- Can provide a gold standard for evaluating other hypotheses
5
Practical Problems
- Typically require initial knowledge of many probabilities, which can be estimated from:
  - Background knowledge
  - Previously available data
  - Assumptions about the form of the distributions
- Significant computational cost of determining the Bayes optimal hypothesis in the general case (linear in the number of hypotheses)
  - Significantly lower for certain special situations
6
Bayes Theorem
- Goal: learn the "best" hypothesis
- Assumption in Bayesian learning: the "best" hypothesis is the most probable hypothesis
- Bayes theorem allows computation of the most probable hypothesis based on:
  - The prior probability of the hypothesis
  - The probability of observing certain data given the hypothesis
  - The observed data itself
7
Notation
- P(h): prior probability of hypothesis h
- P(D): prior probability of data D
- P(D|h): probability of D given h (the posterior probability of D given h, also called the likelihood of the data given h)
- P(h|D): probability that h holds, given the data
8
Bayes Theorem
- Based on the definitions of P(D|h) and P(h|D)
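The equation itself did not survive the transcript; the statement of Bayes theorem the slide refers to is:

```latex
P(h \mid D) = \frac{P(D \mid h)\, P(h)}{P(D)}
```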
9
Maximum A Posteriori Hypothesis
- Many learning algorithms try to identify the most probable hypothesis h ∈ H given the observations D
- This is the maximum a posteriori (MAP) hypothesis
10
Identifying the MAP Hypothesis using Bayes Theorem
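The derivation on this slide is not in the transcript; it is the standard one, using the fact that P(D) does not depend on h:

```latex
h_{MAP} \equiv \arg\max_{h \in H} P(h \mid D)
        = \arg\max_{h \in H} \frac{P(D \mid h)\, P(h)}{P(D)}
        = \arg\max_{h \in H} P(D \mid h)\, P(h)
```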
11
Equally Probable Hypotheses
- If every hypothesis in H is a priori equally probable (P(hi) = P(hj) for all hi and hj), the P(h) term can be dropped as well, leaving hML = argmax_{h∈H} P(D|h)
- Any hypothesis that maximizes P(D|h) is a maximum likelihood (ML) hypothesis
12
Bayes Theorem and Concept Learning
Concept learning task:
- H: hypothesis space
- X: instance space
- Target concept c: X → {0, 1}
13
Brute-Force MAP Learning Algorithm
1. For each hypothesis h in H, calculate the posterior probability P(h|D) = P(D|h) P(h) / P(D)
2. Output the hypothesis hMAP with the highest posterior probability
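A minimal sketch of the brute-force procedure in code, assuming the caller supplies the hypothesis space, a prior P(h), and a likelihood P(D|h) as functions; these names are illustrative, not from the slides.

```python
# A minimal sketch of brute-force MAP learning, following the two steps above.
# The hypothesis representation, prior, and likelihood are hypothetical
# placeholders; the slides do not prescribe a particular form.

def brute_force_map(hypotheses, prior, likelihood, data):
    """Return the hypothesis h with the highest posterior P(h|D).

    prior      : function h -> P(h)
    likelihood : function (data, h) -> P(D|h)
    """
    # P(D) is the same for every h, so comparing unnormalized posteriors suffices.
    return max(hypotheses, key=lambda h: likelihood(data, h) * prior(h))
```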
14
To Apply Brute Force MAP Learning
- Specify the prior P(h) for each hypothesis
- Specify the likelihood P(D|h)
15
An Example
Assume:
- The training data D is noise free (di = c(xi))
- The target concept c is contained in H
- We have no a priori reason to believe any one hypothesis is more likely than another, so P(h) = 1/|H| for every h in H
16
Probability of Data Given Hypothesis
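The slide's equation is missing from the transcript; under the noise-free assumption above, the standard choice is:

```latex
P(D \mid h) =
\begin{cases}
1 & \text{if } d_i = h(x_i) \text{ for every } \langle x_i, d_i \rangle \in D \\
0 & \text{otherwise}
\end{cases}
```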
17
Apply the Algorithm
Step 1 (two cases):
- Case 1 (D is inconsistent with h): P(D|h) = 0, so P(h|D) = 0
- Case 2 (D is consistent with h): P(D|h) = 1, so P(h|D) = (1/|H|) / P(D)
18
Step 2
- Every hypothesis consistent with D has posterior probability 1/|VSH,D|
- Every hypothesis inconsistent with D has posterior probability 0
(VSH,D denotes the version space: the subset of hypotheses in H that are consistent with D.)
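Spelling out the step the slide relies on (the standard textbook derivation; the slide's own equations are not in the transcript): P(D) follows from the theorem of total probability, and substituting it back gives the result.

```latex
P(D) = \sum_{h \in H} P(D \mid h)\, P(h)
     = \sum_{h \in VS_{H,D}} 1 \cdot \frac{1}{|H|}
     = \frac{|VS_{H,D}|}{|H|},
\qquad
P(h \mid D) = \frac{1 \cdot \tfrac{1}{|H|}}{\tfrac{|VS_{H,D}|}{|H|}}
            = \frac{1}{|VS_{H,D}|}
\ \text{ for every consistent } h
```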
19
MAP hypothesis and consistent learners
- Under the assumptions above, every consistent learner outputs a MAP hypothesis, for example:
  - FIND-S (finds the maximally specific consistent hypothesis)
  - Candidate-Elimination (finds all consistent hypotheses)
20
Maximum Likelihood and Least-Squared Error Learning
- New problem: learning a continuous-valued target function
- We will show that, under certain assumptions, any learning algorithm that minimizes the squared error between its hypothesis's outputs and the training data will output a maximum likelihood hypothesis
21
Problem Setting
- Learner L
- Instance space X; hypothesis space H, where each h: X → R
- The task of L is to learn an unknown target function f: X → R
- We have m training examples
- The target value of each example is corrupted by random noise drawn from a Normal distribution
22
Work Through Derivation
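The derivation itself is not in the transcript; the standard argument (assuming independent examples and di = f(xi) + ei with ei ~ N(0, σ²)) runs as follows:

```latex
\begin{aligned}
h_{ML} &= \arg\max_{h \in H} p(D \mid h)
        = \arg\max_{h \in H} \prod_{i=1}^{m} \frac{1}{\sqrt{2\pi\sigma^2}}
          \exp\!\left(-\frac{(d_i - h(x_i))^2}{2\sigma^2}\right) \\
       &= \arg\max_{h \in H} \sum_{i=1}^{m}
          \left( \ln\frac{1}{\sqrt{2\pi\sigma^2}} - \frac{(d_i - h(x_i))^2}{2\sigma^2} \right)
        = \arg\min_{h \in H} \sum_{i=1}^{m} \bigl(d_i - h(x_i)\bigr)^2
\end{aligned}
```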
23
Why Normal Distribution for Noise?
- It is easy to work with analytically
- It is a good approximation of many physical processes
- Important point: we are only modeling noise in the target values, not in the attribute values
24
Bayes Optimal Classifier
Two questions:
- What is the most probable hypothesis given the training data? (Find the MAP hypothesis)
- What is the most probable classification of a new instance given the training data?
25
Example
- Three hypotheses: P(h1|D) = 0.35, P(h2|D) = 0.45, P(h3|D) = 0.20
- For a new instance x: h1 predicts negative, h2 predicts positive, h3 predicts negative
- What is the predicted class using hMAP?
- What is the predicted class using all hypotheses?
26
Bayes Optimal Classification
- The most probable classification of a new instance is obtained by combining the predictions of all hypotheses, weighted by their posterior probabilities
- Suppose the set of possible classification values is V (each possible value is vj)
- The probability that vj is the correct classification for the new instance is given by the expression below
- Pick the vj with the maximum probability as the predicted class
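The expression the slide refers to is the standard Bayes optimal classification rule:

```latex
P(v_j \mid D) = \sum_{h_i \in H} P(v_j \mid h_i)\, P(h_i \mid D),
\qquad
v_{OB} = \arg\max_{v_j \in V} \sum_{h_i \in H} P(v_j \mid h_i)\, P(h_i \mid D)
```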
27
Bayes Optimal Classifier
Apply this to the previous example:
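The worked numbers are not in the transcript. Assuming each hypothesis predicts its class deterministically (P(v|h) = 1 for its predicted class and 0 otherwise), the calculation for the example above is:

```latex
\begin{aligned}
P(+ \mid D) &= 0 \cdot 0.35 + 1 \cdot 0.45 + 0 \cdot 0.20 = 0.45 \\
P(- \mid D) &= 1 \cdot 0.35 + 0 \cdot 0.45 + 1 \cdot 0.20 = 0.55
\end{aligned}
```

So hMAP (= h2) predicts positive, while the Bayes optimal classification is negative.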
28
Bayes Optimal Classification
- Gives the optimal (error-minimizing) solution to prediction and classification problems
- Requires the probability of each exact combination of evidence
- All classification methods can be viewed as approximations of Bayes rule with varying assumptions about the conditional probabilities:
  - Assume they come from some distribution
  - Assume conditional independence
  - Assume an underlying model of a specific form (e.g., a linear combination of evidence, or a decision tree)
29
Simplifications of Bayes Rule
- Given observations of attribute values a1, a2, ..., an, compute the most probable target value vMAP
- Use Bayes theorem to rewrite, as shown below
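The rewriting the slide refers to is the standard one (the denominator is constant across target values, so it can be dropped):

```latex
v_{MAP} = \arg\max_{v_j \in V} P(v_j \mid a_1, a_2, \ldots, a_n)
        = \arg\max_{v_j \in V} \frac{P(a_1, \ldots, a_n \mid v_j)\, P(v_j)}{P(a_1, \ldots, a_n)}
        = \arg\max_{v_j \in V} P(a_1, \ldots, a_n \mid v_j)\, P(v_j)
```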
30
Naïve Bayes
- The most common simplification of Bayes rule is to assume conditional independence of the observations:
  - because it is approximately true in many domains, and
  - because it is computationally convenient
- Assume the probability of observing the conjunction a1, a2, ..., an is the product of the probabilities of the individual attributes given the class
- Learning consists of estimating these probabilities from the training data
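Under the conditional independence assumption, the classification rule becomes the familiar naive Bayes rule:

```latex
v_{NB} = \arg\max_{v_j \in V} P(v_j) \prod_{i} P(a_i \mid v_j)
```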
31
Simple Example
- Two classes, C1 and C2
- Two features:
  - a1: Male / Female
  - a2: Blue eyes / Brown eyes
- Instance: (Male with blue eyes). What is the class?

  Probability       C1     C2
  P(Ci)             0.4    0.6
  P(Male|Ci)        0.1    0.2
  P(BlueEyes|Ci)    0.3    (value not shown)
32
Estimating Probabilities (Classifying Executables)
- Two classes: Malicious (M), Benign (B)
- Features:
  - a1: GUI present (yes/no)
  - a2: Deletes files (yes/no)
  - a3: Allocates memory (yes/no)
  - a4: Length (< 1K, 1-10K, > 10K)
33
Training data: a table of 10 example executables giving values for a1-a4 and a class label (M or B); most of the table entries did not survive the transcript.
34
Classify the Following Instance
<Yes, No, Yes, Yes>
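Because the training table above is largely unrecoverable, the sketch below shows generically how such counts are turned into probability estimates and a prediction; the tiny dataset inside is hypothetical, not the slide's. Note that it uses the simple nc/n estimates; the next slides discuss why an m-estimate is usually preferable.

```python
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """examples: list of (attribute_tuple, class_label)."""
    class_counts = Counter(label for _, label in examples)
    # attr_counts[(class, attribute_index, value)] -> count
    attr_counts = defaultdict(int)
    for attrs, label in examples:
        for i, value in enumerate(attrs):
            attr_counts[(label, i, value)] += 1
    return class_counts, attr_counts, len(examples)

def classify(instance, class_counts, attr_counts, n):
    best_label, best_score = None, -1.0
    for label, n_c in class_counts.items():
        score = n_c / n                                    # estimate of P(v)
        for i, value in enumerate(instance):
            score *= attr_counts[(label, i, value)] / n_c  # estimate of P(a_i|v)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data: (GUI, DeletesFiles, AllocatesMemory, Length), class
data = [
    (("Yes", "No",  "Yes", "1-10K"), "B"),
    (("No",  "Yes", "Yes", "<1K"),   "M"),
    (("Yes", "No",  "No",  ">10K"),  "B"),
    (("No",  "Yes", "Yes", "1-10K"), "M"),
]
model = train_naive_bayes(data)
print(classify(("Yes", "No", "Yes", "1-10K"), *model))
```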
35
Estimating Probabilities
- To estimate P(C|D):
  - Let n be the number of training examples labeled D
  - Let nc be the number of examples labeled D that are also labeled C
  - P(C|D) is estimated as nc/n
- Problems:
  - This provides a poor estimate when nc is very small
  - When the estimated term is 0, it dominates (zeroes out) all the other terms in the product
36
Use m-estimate of probability
- p is a prior estimate of the probability we are trying to estimate (often we assume the attribute values are equally probable)
- m is a constant called the equivalent sample size
- One way to view this: we augment the n actual observations with m virtual samples distributed according to p
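The m-estimate formula itself is not in the transcript; the standard form, using p and m as defined above, is:

```latex
\hat{P} = \frac{n_c + m\,p}{n + m}
```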
37
Repeat the Estimates
- Use equal priors for the attribute values
- Use an m value of 1
38
Bayesian Belief Networks
- Naïve Bayes is based on the assumption of conditional independence
- Bayesian belief networks provide a tractable method for specifying dependencies among variables
39
Terminology
- A Bayesian belief network describes the probability distribution over a set of random variables Y1, Y2, ..., Yn
- Each variable Yi can take on a set of values V(Yi)
- The joint space of the set of variables is the cross product V(Y1) × V(Y2) × ... × V(Yn)
- Each item in the joint space corresponds to one possible assignment of values to the tuple of variables <Y1, ..., Yn>
- The joint probability distribution specifies the probabilities of the items in the joint space
- A Bayesian network provides a way to describe the joint probability distribution in a compact manner
40
Conditional Independence
- Let X, Y, and Z be three discrete-valued random variables
- We say that X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given a value for Z, that is, P(X | Y, Z) = P(X | Z)
41
Bayesian Belief Network
- A set of random variables makes up the nodes of the network
- A set of directed links (arrows) connects pairs of nodes; the intuitive meaning of an arrow from X to Y is that X has a direct influence on Y
- Each node has a conditional probability table (CPT) that quantifies the effects the parents have on the node; the parents of a node are all the nodes that have arrows pointing to it
- The graph has no directed cycles (it is a DAG)
42
Example (from Judea Pearl)
You have a new burglar alarm installed at home. It is fairly reliable at detecting a burglary, but also responds on occasion to minor earthquakes. You also have two neighbors, John and Mary, who have promised to call you at work when they hear the alarm. John always calls when he hears the alarm, but sometimes confuses the telephone ringing with the alarm and calls then, too. Mary, on the other hand, likes rather loud music and sometimes misses the alarm altogether. Given the evidence of who has or has not called, we would like to estimate the probability of a burglary.
43
Step 1
- Determine what the propositional (random) variables should be
- Determine the causal (or other influence) relationships and develop the topology of the network
44
Topology of Belief Network
(Network diagram: Burglary and Earthquake are parents of Alarm; Alarm is the parent of JohnCalls and MaryCalls.)
45
Step 2
- Specify a conditional probability table (CPT) for each node
- Each row in the table contains the conditional probability of each node value for a conditioning case (a possible combination of values for the parent nodes)
- In the example, the possible values for each node are true/false
- The sum of the probabilities for the values of a node given a particular conditioning case is 1
46
Example: CPT for Alarm Node
Burglary   Earthquake   P(Alarm = True)   P(Alarm = False)
True       True
True       False
False      True
False      False
(The probability entries did not survive the transcript.)
47
Complete Belief Network
(Network diagram with CPTs for each node. Values recoverable from the transcript: P(B) = 0.001 and P(E) = 0.002; the Alarm CPT is conditioned on B and E, and the JohnCalls and MaryCalls CPTs are conditioned on A. The remaining numeric entries did not survive.)
48
Semantics of Belief Networks
View 1: A belief network is a representation of the joint probability distribution ("the joint") of a domain. The joint completely specifies an agent's probability assignments to all propositions in the domain (both simple and complex).
49
Network as representation of joint
- A generic entry in the joint probability distribution is the probability of a conjunction of particular assignments to each variable, such as P(Y1 = y1 ∧ ... ∧ Yn = yn)
- Each entry in the joint is represented by the product of the appropriate elements of the CPTs in the belief network, as shown below
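The product the slide describes is the standard belief network factorization:

```latex
P(y_1, y_2, \ldots, y_n) = \prod_{i=1}^{n} P\bigl(y_i \mid Parents(Y_i)\bigr)
```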
50
Example Calculation
Calculate the probability of the event that the alarm has sounded but neither a burglary nor an earthquake has occurred, and both John and Mary call:

  P(J ∧ M ∧ A ∧ ¬B ∧ ¬E) = P(J|A) P(M|A) P(A|¬B,¬E) P(¬B) P(¬E)
                         = 0.90 × 0.70 × P(A|¬B,¬E) × 0.999 × 0.998

(The value of P(A|¬B,¬E), and therefore the final number, comes from the Alarm CPT, which did not survive the transcript; P(¬B) = 0.999 follows from P(B) = 0.001.)
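To make the arithmetic concrete, here is the same computation in code. The value 0.001 for P(A|¬B,¬E) is not in the transcript; it is the value used in the standard Russell and Norvig / Pearl version of this example, so treat it as an assumption.

```python
# Joint probability of J ∧ M ∧ A ∧ ¬B ∧ ¬E as a product of CPT entries.
p_j_given_a = 0.90                 # P(J|A), from the slide
p_m_given_a = 0.70                 # P(M|A), from the slide
p_a_given_not_b_not_e = 0.001      # assumed: standard textbook CPT value, not in the transcript
p_not_b = 1 - 0.001                # P(B) = 0.001 on the "Complete Belief Network" slide
p_not_e = 1 - 0.002                # P(E) = 0.002

p = p_j_given_a * p_m_given_a * p_a_given_not_b_not_e * p_not_b * p_not_e
print(p)   # ~0.000628 under the assumed CPT value
```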
51
Semantics
- View 2: A belief network is an encoding of a collection of conditional independence statements
  - For example, JohnCalls is conditionally independent of the other variables in the network given the value of Alarm
- This view is useful for understanding inference procedures for the networks
52
Inference Methods for Bayesian Networks
- We may want to infer the value of some target variable (e.g., Burglary) given observed values for other variables
- What we generally want is the probability distribution of the target variable given the evidence
- Inference is straightforward if all the other values in the network are known
- In the more general case, where we know the values of only a subset of the variables, we can infer a probability distribution over the other variables
- Exact inference is an NP-hard problem, but approximate methods work well in practice
53
Learning Bayesian Belief Networks
- The focus of a great deal of research
- Several situations of varying complexity:
  - The network structure may be given or not
  - All variables may be observable, or some variables may be unobservable
- If the network structure is known and all variables can be observed, the CPTs can be estimated just as they were for naïve Bayes
54
Gradient Ascent Training of Bayesian Networks
- Method developed by Russell et al.
- Maximizes P(D|h) by following the gradient of ln P(D|h)
- Let wijk be a single CPT entry: the probability that variable Yi takes on value yij given that its immediate parents Ui take on the values given by uik
55
Illustration
- A node Yi with immediate parents Ui: the parents take the values Ui = uik and the node takes the value Yi = yij
- wijk = P(Yi = yij | Ui = uik)
56
Result
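The result shown on this slide is not in the transcript; the standard gradient and the corresponding weight update (with learning rate η, followed by renormalizing the wijk for each i, k so they remain a probability distribution) are:

```latex
\frac{\partial \ln P(D \mid h)}{\partial w_{ijk}}
  = \sum_{d \in D} \frac{P_h(Y_i = y_{ij},\, U_i = u_{ik} \mid d)}{w_{ijk}},
\qquad
w_{ijk} \leftarrow w_{ijk} + \eta \sum_{d \in D} \frac{P_h(y_{ij},\, u_{ik} \mid d)}{w_{ijk}}
```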
57
Example
- Consider the Burglary/Earthquake/Alarm network from before
- To compute the gradient for the entries P(A|B,E), we would need P(A, B, E | d) for each training example d
58
EM Algorithm
- The EM algorithm is a general-purpose algorithm used in many settings, including:
  - Unsupervised learning
  - Learning CPTs for Bayesian networks
  - Learning hidden Markov models
- It is a two-step algorithm for learning in the presence of hidden variables
59
Two-Step Process
- For a specific problem we have three quantities:
  - X: observed data for the instances
  - Z: unobserved data for the instances (this is usually what we are trying to learn)
  - Y: the full data
- General approach:
  - Determine an initial hypothesis for the values of Z
  - Step 1 (Estimation): compute a function Q(h'|h) using the current hypothesis h and the observed data X to estimate the probability distribution over Y
  - Step 2 (Maximization): replace hypothesis h with the h' that maximizes the Q function
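In the standard formulation, the Q function is the expected log-likelihood of the full data under the candidate hypothesis h', with the expectation over the unobserved Z taken using the current hypothesis h and the observed X:

```latex
Q(h' \mid h) = E\bigl[\ln P(Y \mid h') \;\big|\; h, X\bigr],
\qquad
h \leftarrow \arg\max_{h'} Q(h' \mid h)
```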
60
K-Means Algorithm
- Assume the data comes from a mixture of 2 Gaussian distributions
- The means (μ1, μ2) are unknown
- (Figure: the density P(x) plotted against x, showing the two Gaussian components.)
61
Generation of Data
- Select one of the two normal distributions at random
- Generate a single random instance xi using the selected distribution
62
Example: Select Initial Values for h
(Figure: the data points on the x-axis with initial guesses for the two means, μ1 and μ2, marked.)
63
E-step: Compute the probability that each datum xi was generated by each component
(Figure: the same data, with the current estimates of μ1 and μ2 and the expected component memberships of the points.)
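In the two-Gaussian, known-variance setting the slides describe, the standard expectation computed at this step is:

```latex
E[z_{ij}] = \frac{p(x_i \mid \mu_j)}{\sum_{n=1}^{2} p(x_i \mid \mu_n)}
          = \frac{\exp\!\bigl(-\tfrac{(x_i - \mu_j)^2}{2\sigma^2}\bigr)}
                 {\sum_{n=1}^{2} \exp\!\bigl(-\tfrac{(x_i - \mu_n)^2}{2\sigma^2}\bigr)}
```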
64
M-step: Replace hypothesis h with the h' that maximizes Q
- Each mean is re-estimated as the expectation-weighted average of the data: μj ← Σi E[zij] xi / Σi E[zij]
(Figure: the updated means μ1' and μ2' on the x-axis.)
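A compact sketch of the two-Gaussian EM loop the last three slides illustrate, assuming a known, shared variance; the function names and synthetic data are illustrative, not from the slides.

```python
import math
import random

# EM for a mixture of two Gaussians with known, shared variance.
# Only the two means are estimated, matching the slides' setting.

def em_two_gaussians(xs, sigma=1.0, iters=50):
    mu = [min(xs), max(xs)]                      # initial guesses for <mu1, mu2>
    for _ in range(iters):
        # E-step: expected membership of each point in each component
        z = []
        for x in xs:
            w = [math.exp(-(x - m) ** 2 / (2 * sigma ** 2)) for m in mu]
            total = sum(w)
            z.append([wi / total for wi in w])
        # M-step: re-estimate each mean as the membership-weighted average
        for j in range(2):
            num = sum(z[i][j] * xs[i] for i in range(len(xs)))
            den = sum(z[i][j] for i in range(len(xs)))
            mu[j] = num / den
    return mu

# Synthetic data drawn from two Gaussians, as in the "Generation of Data" slide
random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(100)] + \
       [random.gauss(4.0, 1.0) for _ in range(100)]
print(em_two_gaussians(data))   # should approach the true means, roughly [0.0, 4.0]
```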