A Survey on Using Bayesian Reasoning in Data Mining
Directed by: Dr. Rahgozar
Mostafa Haghir Chehreghani
Outline
- Bayes theorem
- MAP and ML hypotheses
- Minimum description length principle
- Bayes optimal classifier
- Naïve Bayes learner
- Summary
Two Roles for Bayesian Methods
Provides practical learning algorithms:
- Naïve Bayes learning
- Bayesian belief network learning
- combine prior knowledge with observed data
- requires prior probabilities
Provides a useful conceptual framework:
- provides a gold standard for evaluating other learning algorithms
- additional insight into Ockham's razor
Bayes Theorem
P(h|D) = P(D|h) P(h) / P(D)
P(h) = prior probability of hypothesis h
P(D) = prior probability of the training data D
P(h|D) = probability of h given D (the posterior)
P(D|h) = probability of D given h (the likelihood)
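As a concrete illustration, here is a minimal Python sketch that applies the theorem to a made-up two-hypothesis problem; the hypothesis names, priors, and likelihood values are assumptions chosen for the example, not taken from the slides.

```python
# Minimal sketch of Bayes' theorem on a made-up two-hypothesis problem.
# All numbers are illustrative assumptions.

priors = {"h_pos": 0.008, "h_neg": 0.992}       # P(h)
likelihoods = {"h_pos": 0.98, "h_neg": 0.03}    # P(D | h) for an observed datum D

# P(D) = sum over h of P(D | h) P(h)
p_data = sum(likelihoods[h] * priors[h] for h in priors)

# P(h | D) = P(D | h) P(h) / P(D)
posteriors = {h: likelihoods[h] * priors[h] / p_data for h in priors}
print(posteriors)
```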
Choosing Hypotheses
Generally we want the most probable hypothesis given the training data: the maximum a posteriori (MAP) hypothesis
h_MAP = argmax_{h ∈ H} P(h|D) = argmax_{h ∈ H} P(D|h) P(h)
(the P(D) term can be dropped because it does not depend on h)
If we assume P(h_i) = P(h_j) for all i, j, this simplifies further: choose the maximum likelihood (ML) hypothesis
h_ML = argmax_{h ∈ H} P(D|h)
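A small sketch of how the MAP and ML choices differ in code, over a hypothetical three-hypothesis space with assumed priors and likelihoods:

```python
# Selecting h_MAP and h_ML over a small hypothetical hypothesis space.
# Priors and likelihoods are illustrative assumptions.

priors = {"h1": 0.8, "h2": 0.1, "h3": 0.1}           # P(h)
likelihoods = {"h1": 0.05, "h2": 0.30, "h3": 0.10}   # P(D | h)

# h_MAP maximizes P(D|h) P(h); P(D) is a common factor and can be ignored.
h_map = max(priors, key=lambda h: likelihoods[h] * priors[h])

# h_ML maximizes P(D|h) alone (equivalent to MAP under a uniform prior).
h_ml = max(priors, key=lambda h: likelihoods[h])

print(h_map, h_ml)   # here the non-uniform prior makes the two choices differ: h1 vs h2
```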
Why Likelihood Functions Are Great
MLEs achieve the Cramer-Rao lower bound (CRLB):
- the CRLB on the variance of an estimator is the inverse of the Fisher information, i.e. the negative expected second derivative of the log-likelihood
- any unbiased estimator must have a variance greater than or equal to the CRLB
The Neyman-Pearson lemma:
- a likelihood ratio test has the minimum possible Type II error among all tests with the significance level α that we selected
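For reference, a compact statement of the bound mentioned above, in its standard form for an unbiased estimator of a scalar parameter (the usual regularity conditions are left implicit):

```latex
% Cramer-Rao lower bound for an unbiased estimator \hat{\theta} of a scalar parameter \theta
\operatorname{Var}(\hat{\theta}) \;\ge\; \frac{1}{I(\theta)},
\qquad
I(\theta) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2}}{\partial \theta^{2}}\,\ln L(\theta \mid X)\right]
```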
Learning a Real-Valued Function
Consider any real-valued target function f
Training examples <x_i, d_i>, where d_i is a noisy training value:
d_i = f(x_i) + e_i
e_i is a random variable (noise) drawn independently for each x_i from a Gaussian distribution with mean µ = 0
Then the maximum likelihood hypothesis h_ML is the one that minimizes the sum of squared errors:
h_ML = argmin_{h ∈ H} Σ_i (d_i − h(x_i))²
Learning a Real-Valued Function (continued)
h_ML = argmax_h p(D|h) = argmax_h ∏_i p(d_i|h); maximize the natural log of this instead, which turns the product into a sum and lets us drop terms that do not depend on h.
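A reconstruction of the standard derivation the slide refers to, assuming the Gaussian noise model with variance σ² from the previous slide and m training examples:

```latex
h_{ML} = \arg\max_{h \in H} \prod_{i=1}^{m}
           \frac{1}{\sqrt{2\pi\sigma^{2}}}
           \exp\!\left(-\frac{(d_i - h(x_i))^{2}}{2\sigma^{2}}\right)
       = \arg\max_{h \in H} \sum_{i=1}^{m} -\frac{(d_i - h(x_i))^{2}}{2\sigma^{2}}
       = \arg\min_{h \in H} \sum_{i=1}^{m} (d_i - h(x_i))^{2}
```

(taking the natural log, then dropping the constant normalization terms and the factor 1/2σ², none of which depend on h).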
Learning to Predict Probabilities
Consider predicting survival probability from patient data
Training examples <x_i, d_i>, where d_i is 1 or 0
We want to train a neural network to output a probability P(d = 1) given x_i (not a hard 0 or 1)
In this case one can show that the maximum likelihood hypothesis maximizes the cross-entropy
Σ_i d_i ln h(x_i) + (1 − d_i) ln(1 − h(x_i))
Weight update rule for a sigmoid unit (a sketch follows below):
Δw_jk = η Σ_i (d_i − h(x_i)) x_ijk
where η is the learning rate
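A minimal Python sketch of this training rule for a single sigmoid unit; the synthetic patient data, the true weight vector, and the learning rate η are illustrative assumptions:

```python
import numpy as np

# Gradient ascent on the cross-entropy log-likelihood for a single sigmoid
# unit. Data and learning rate are illustrative assumptions.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                  # 200 patients, 3 attributes
true_w = np.array([1.5, -2.0, 0.5])
d = (rng.random(200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)  # 0/1 outcomes

w = np.zeros(3)
eta = 0.05
for _ in range(500):
    h = 1 / (1 + np.exp(-X @ w))               # network output = estimated P(d = 1 | x)
    w += eta * X.T @ (d - h)                   # delta w_jk = eta * sum_i (d_i - h(x_i)) x_ijk

print(w)   # should be close to true_w
```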
Minimum Description Length Principle
Ockham's razor: prefer the shortest hypothesis
MDL: prefer the hypothesis h that minimizes
h_MDL = argmin_{h ∈ H} L_C1(h) + L_C2(D|h)    (1)
where L_C(x) is the description length of x under encoding C
Example:
- H = decision trees, D = training data labels
- L_C1(h) = number of bits to describe tree h
- L_C2(D|h) = number of bits to describe D given h (only the misclassified examples need to be encoded)
Hence h_MDL trades off tree size against training errors
Minimum Description Length Principle (continued)
Interesting fact from information theory: the optimal (shortest expected coding length) code for an event with probability p uses −log₂ p bits (e.g. an event with probability 1/8 gets a 3-bit codeword)
So interpret (1):
- −log₂ P(h) is the length of h under the optimal code
- −log₂ P(D|h) is the length of D given h under the optimal code
Hence preferring h_MDL under these optimal encodings is the same as preferring h_MAP, since h_MAP = argmax_h P(D|h) P(h) = argmin_h (−log₂ P(D|h) − log₂ P(h))
So far we’ve sought the most probable hypothesis given the data D (ie., h MAP ) Given new instance x what is its most probable classification ? h MAP is not the most probable classification! Consider three possible hypotheses: Given a new instance x What’s the most probable classification of x ? Most Probable Classification of New Instances
Bayes Optimal Classifier
Combine the predictions of all hypotheses, weighted by their posterior probabilities:
v_OB = argmax_{v_j ∈ V} Σ_{h_i ∈ H} P(v_j | h_i) P(h_i | D)
Summing the posterior weight behind each class value in the three-hypothesis example shows that the Bayes optimal classification can disagree with the classification given by h_MAP (see the sketch below).
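A sketch of the rule on a hypothetical three-hypothesis example; the posteriors 0.4/0.3/0.3 and the per-hypothesis predictions are illustrative assumptions, chosen so that h_MAP predicts + while the Bayes optimal classification is −:

```python
# Bayes optimal classification over a small hypothetical hypothesis space.
# Posteriors and per-hypothesis predictions are illustrative assumptions.

posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}     # P(h | D)
prediction = {"h1": "+", "h2": "-", "h3": "-"}    # each hypothesis's label for the new x

def bayes_optimal(posterior, prediction, labels=("+", "-")):
    # v_OB = argmax_v sum_h P(v | h) P(h | D); here P(v | h) is 1 if h predicts v, else 0
    score = {v: sum(p for h, p in posterior.items() if prediction[h] == v) for v in labels}
    return max(score, key=score.get), score

print(bayes_optimal(posterior, prediction))
# h_MAP is h1 (predicts '+'), but the Bayes optimal classification is '-' with weight 0.6
```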
Gibbs Classifier
The Bayes optimal classifier gives the best result, but it can be expensive when there are many hypotheses
Gibbs algorithm:
1. Choose one hypothesis at random, according to P(h|D)
2. Use this to classify the new instance
Surprising fact: assume the target concepts are drawn at random from H according to the priors on H; then the expected error of the Gibbs classifier is at most twice the expected error of the Bayes optimal classifier:
E[error_Gibbs] ≤ 2 E[error_BayesOptimal]
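A minimal sketch of the Gibbs classifier, reusing the same illustrative posteriors and predictions: sample one hypothesis according to P(h|D) and let it classify the instance.

```python
import random

# Gibbs classifier sketch: sample a hypothesis according to P(h | D) and use
# its prediction. Posteriors/predictions are the same illustrative assumptions.

posterior = {"h1": 0.4, "h2": 0.3, "h3": 0.3}
prediction = {"h1": "+", "h2": "-", "h3": "-"}

def gibbs_classify(posterior, prediction):
    hypotheses = list(posterior)
    h = random.choices(hypotheses, weights=[posterior[h] for h in hypotheses])[0]
    return prediction[h]

print(gibbs_classify(posterior, prediction))
```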
Naive Bayes Classifier
Along with decision trees, neural networks, and nearest neighbor, one of the most practical learning methods
When to use:
- a moderate or large training set is available
- the attributes that describe instances are conditionally independent given the classification
Successful applications:
- diagnosis
- classifying text documents
Naive Bayes Classifier
Assume a target function f : X → V, where each instance x is described by attributes a_1, a_2, …, a_n
The most probable value of f(x) is
v_MAP = argmax_{v_j ∈ V} P(v_j | a_1, a_2, …, a_n) = argmax_{v_j ∈ V} P(a_1, a_2, …, a_n | v_j) P(v_j)
Naïve Bayes assumption: P(a_1, a_2, …, a_n | v_j) = ∏_i P(a_i | v_j)
which gives the naïve Bayes classifier:
v_NB = argmax_{v_j ∈ V} P(v_j) ∏_i P(a_i | v_j)
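A minimal count-based naïve Bayes sketch for categorical attributes; the tiny weather-style data set and the Laplace smoothing constant alpha are illustrative assumptions:

```python
from collections import Counter, defaultdict

# Categorical naive Bayes sketch. The toy data set and the Laplace smoothing
# constant are illustrative assumptions.

data = [  # (attributes, class value)
    ({"outlook": "sunny", "wind": "weak"}, "no"),
    ({"outlook": "sunny", "wind": "strong"}, "no"),
    ({"outlook": "rain", "wind": "weak"}, "yes"),
    ({"outlook": "overcast", "wind": "weak"}, "yes"),
    ({"outlook": "rain", "wind": "strong"}, "no"),
]

class_counts = Counter(v for _, v in data)
attr_counts = defaultdict(Counter)          # (attribute, class) -> value counts
for attrs, v in data:
    for a, val in attrs.items():
        attr_counts[(a, v)][val] += 1

def classify(attrs, alpha=1.0):
    # v_NB = argmax_v P(v) * prod_i P(a_i | v), with Laplace smoothing
    scores = {}
    for v, cv in class_counts.items():
        score = cv / len(data)
        for a, val in attrs.items():
            n_values = len(set(example[0][a] for example in data))
            score *= (attr_counts[(a, v)][val] + alpha) / (cv + alpha * n_values)
        scores[v] = score
    return max(scores, key=scores.get)

print(classify({"outlook": "sunny", "wind": "weak"}))
```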
Conditional Independence
Definition: X is conditionally independent of Y given Z if the probability distribution governing X is independent of the value of Y given the value of Z; that is, if
P(X = x_i | Y = y_j, Z = z_k) = P(X = x_i | Z = z_k) for all x_i, y_j, z_k
More compactly, we write P(X | Y, Z) = P(X | Z)
Example: Thunder is conditionally independent of Rain, given Lightning:
P(Thunder | Rain, Lightning) = P(Thunder | Lightning)
Inference in Bayesian Networks
How can one infer the (probabilities of) values of one or more network variables, given observed values of others?
- the Bayes net contains all the information needed for this inference
- if only one variable has an unknown value, it is easy to infer
- in the general case, the problem is NP-hard
In practice, inference can succeed in many cases:
- exact inference methods work well for some network structures
- Monte Carlo methods simulate the network randomly to compute approximate solutions (see the sketch below)
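To make the Monte Carlo idea concrete, here is a rejection-sampling sketch on a tiny hypothetical Storm → Lightning → Thunder chain; the network structure and all probabilities are made-up assumptions:

```python
import random

# Rejection-sampling inference on a tiny hypothetical Bayes net:
# Storm -> Lightning -> Thunder. All probabilities are made-up assumptions.

P_STORM = 0.2
P_LIGHTNING_GIVEN_STORM = {True: 0.7, False: 0.05}
P_THUNDER_GIVEN_LIGHTNING = {True: 0.95, False: 0.01}

def sample():
    storm = random.random() < P_STORM
    lightning = random.random() < P_LIGHTNING_GIVEN_STORM[storm]
    thunder = random.random() < P_THUNDER_GIVEN_LIGHTNING[lightning]
    return storm, thunder

def estimate_p_storm_given_thunder(n=100_000):
    kept = storms = 0
    for _ in range(n):
        storm, thunder = sample()
        if thunder:               # keep only samples consistent with the evidence
            kept += 1
            storms += storm
    return storms / kept if kept else float("nan")

print(estimate_p_storm_given_thunder())   # approximate P(Storm | Thunder = true)
```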
Learning Bayes Nets
Suppose the structure is known and the variables are only partially observable
e.g. we observe ForestFire, Storm, BusTourGroup, Thunder, but not Lightning, Campfire, …
This is similar to training a neural network with hidden units: in fact, we can learn the network's conditional probability tables using gradient ascent, converging to a network h that (locally) maximizes P(D|h)
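One standard formulation of the gradient-ascent rule being alluded to (stated here as an assumption, since the slide's equations are not shown): let w_ijk denote the conditional probability table entry P(Y_i = y_ij | U_i = u_ik), where U_i are the parents of Y_i and η is a learning rate; after each update the w_ijk are renormalized so that Σ_j w_ijk = 1.

```latex
% Gradient of the log-likelihood with respect to CPT entry w_{ijk} = P(Y_i = y_{ij} \mid U_i = u_{ik}),
% and the corresponding ascent step (followed by renormalization of each CPT column).
\frac{\partial \ln P(D \mid h)}{\partial w_{ijk}}
  = \sum_{d \in D} \frac{P_h\!\left(Y_i = y_{ij},\, U_i = u_{ik} \mid d\right)}{w_{ijk}},
\qquad
w_{ijk} \leftarrow w_{ijk} + \eta \sum_{d \in D} \frac{P_h\!\left(y_{ij}, u_{ik} \mid d\right)}{w_{ijk}}
```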
More on Learning Bayes Nets
The EM algorithm can also be used; repeatedly:
1. Calculate the probabilities of the unobserved variables, assuming the current hypothesis h
2. Calculate new w_ijk values to maximize E[ln P(D|h)], where D now includes both the observed variables and the (calculated probabilities of the) unobserved variables
When the structure is unknown:
- algorithms use greedy search to add/subtract edges and nodes
- this is an active research topic
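To make the E-step/M-step pattern concrete, here is a small generic EM sketch on a hypothetical two-component Bernoulli mixture; it is not the Bayes-net w_ijk update itself, just the same expectation/maximization loop on made-up data:

```python
import numpy as np

# Generic EM sketch on a hypothetical two-component Bernoulli mixture.
# The data and initial parameters are illustrative assumptions.

rng = np.random.default_rng(1)
z = rng.random(500) < 0.6                                      # hidden component assignments
x = (rng.random(500) < np.where(z, 0.8, 0.3)).astype(float)    # observed 0/1 data

pi, p = 0.5, np.array([0.6, 0.4])     # initial mixing weight and Bernoulli parameters
for _ in range(50):
    # E-step: posterior responsibility of component 0 for each observation
    l0 = pi * p[0] ** x * (1 - p[0]) ** (1 - x)
    l1 = (1 - pi) * p[1] ** x * (1 - p[1]) ** (1 - x)
    r = l0 / (l0 + l1)
    # M-step: re-estimate parameters to maximize the expected log-likelihood
    pi = r.mean()
    p = np.array([(r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()])

print(pi, p)   # should roughly recover the mixing weight 0.6 and the probabilities 0.8 / 0.3
```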
Summary
Bayesian methods combine prior knowledge with observed data
The impact of the prior knowledge (when it is correct!) is to lower the sample complexity
Active research areas:
- extending from Boolean to real-valued variables
- parameterized distributions instead of tables
- extending from propositional to first-order systems
- more effective inference methods
- …
Any questions?