1
Artificial Intelligence
8. Supervised and unsupervised learning
Japan Advanced Institute of Science and Technology (JAIST)
Yoshimasa Tsuruoka
2
Outline
– Supervised learning
– Naive Bayes classifier
– Unsupervised learning
– Clustering
Lecture slides: http://www.jaist.ac.jp/~tsuruoka/lectures/
3
Supervised and unsupervised learning
Supervised learning
– Each instance is assigned a label
– Classification, regression
– Training data need to be created manually
Unsupervised learning
– Each instance is just a vector of attribute values
– Clustering
– Pattern mining
4
Naive Bayes classifier
Reference: Chapter 6.9 of Mitchell, T., Machine Learning (1997)
The Naive Bayes classifier:
– outputs probabilities
– is easy to implement
– assumes conditional independence between features
– offers efficient learning and classification
5
Bayes' theorem
Thomas Bayes (1702–1761)
The reverse conditional probability can be calculated from the original conditional probability and the prior probabilities, as written out below.
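In standard form, for events $A$ and $B$ with $P(B) > 0$:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$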
6
Bayes' theorem
Can we know the probability of having cancer from the result of a medical test?
7
Bayes' theorem
The probability of actually having cancer is not very high.
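A classic version of this example, with the numbers from Mitchell's textbook (the slide's own figures are not reproduced, so these serve as a stand-in): 0.8% of patients have cancer, the test is positive for 98% of cancer cases and for 3% of cancer-free cases. Then

$$P(\text{cancer} \mid +) = \frac{P(+ \mid \text{cancer})\,P(\text{cancer})}{P(+)} = \frac{0.98 \times 0.008}{0.98 \times 0.008 + 0.03 \times 0.992} \approx 0.21$$

so even after a positive test, the probability of actually having cancer is only about 21%.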
8
Naive Bayes classifier
Apply Bayes' theorem to the class posterior; the denominator is constant across classes, so it can be dropped. Then assume that the features are conditionally independent given the class.
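Written out, for a class $c$ and feature values $a_1, \dots, a_n$:

$$P(c \mid a_1, \dots, a_n) = \frac{P(a_1, \dots, a_n \mid c)\,P(c)}{P(a_1, \dots, a_n)} \propto P(c) \prod_{i=1}^{n} P(a_i \mid c)$$

so the predicted class is $c_{NB} = \operatorname{arg\,max}_c\, P(c) \prod_i P(a_i \mid c)$.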
9
Training data

Day  Outlook   Temperature  Humidity  Wind    PlayTennis
D1   Sunny     Hot          High      Weak    No
D2   Sunny     Hot          High      Strong  No
D3   Overcast  Hot          High      Weak    Yes
D4   Rain      Mild         High      Weak    Yes
D5   Rain      Cool         Normal    Weak    Yes
D6   Rain      Cool         Normal    Strong  No
D7   Overcast  Cool         Normal    Strong  Yes
D8   Sunny     Mild         High      Weak    No
D9   Sunny     Cool         Normal    Weak    Yes
D10  Rain      Mild         Normal    Weak    Yes
D11  Sunny     Mild         Normal    Strong  Yes
D12  Overcast  Mild         High      Strong  Yes
D13  Overcast  Hot          Normal    Weak    Yes
D14  Rain      Mild         High      Strong  No
10
Naive Bayes classifier: classifying a new instance.
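Mitchell's worked example for this data classifies the new instance ⟨Outlook = Sunny, Temperature = Cool, Humidity = High, Wind = Strong⟩; the calculations on the following slides assume this instance.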
11
Class prior probability
Maximum likelihood estimation
– Just counting the number of occurrences in the training data
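Counting over the 14 training days gives:

$$P(\text{PlayTennis} = \text{Yes}) = \frac{9}{14}, \qquad P(\text{PlayTennis} = \text{No}) = \frac{5}{14}$$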
12
Conditional probabilities of features
Maximum likelihood estimation: the fraction of instances of each class that have the given feature value.
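For example, Wind = Strong occurs on 3 of the 9 Yes days and 3 of the 5 No days, so:

$$P(\text{Strong} \mid \text{Yes}) = \frac{3}{9}, \qquad P(\text{Strong} \mid \text{No}) = \frac{3}{5}$$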
13
Class posterior probabilities
Multiply the class prior by the conditional probabilities of the observed feature values, then normalize over the classes.
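For the instance ⟨Sunny, Cool, High, Strong⟩, the unnormalized scores are

$$P(\text{Yes}) \prod_i P(a_i \mid \text{Yes}) = \frac{9}{14} \cdot \frac{2}{9} \cdot \frac{3}{9} \cdot \frac{3}{9} \cdot \frac{3}{9} \approx 0.0053$$

$$P(\text{No}) \prod_i P(a_i \mid \text{No}) = \frac{5}{14} \cdot \frac{3}{5} \cdot \frac{1}{5} \cdot \frac{4}{5} \cdot \frac{3}{5} \approx 0.0206$$

Normalizing, $P(\text{No} \mid x) \approx 0.0206 / (0.0206 + 0.0053) \approx 0.80$, so the classifier predicts PlayTennis = No.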
14
Smoothing
Maximum likelihood estimation
– Estimated probabilities are not reliable when the count $n_c$ is small
The m-estimate of probability:

$$P(a \mid c) = \frac{n_c + m\,p}{n + m}$$

where $p$ is a prior probability and $m$ is the equivalent sample size.
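As an illustration (the smoothing parameters here are an assumed choice, not from the slide): for $P(\text{Sunny} \mid \text{No})$ with $n = 5$, $n_c = 3$, a uniform prior $p = 1/3$ over the three Outlook values, and $m = 3$:

$$P(\text{Sunny} \mid \text{No}) = \frac{3 + 3 \cdot \frac{1}{3}}{5 + 3} = \frac{4}{8} = 0.5$$

which pulls the raw maximum-likelihood estimate $3/5$ toward the prior.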
15
Text classification with a Naive Bayes classifier
Text classification
– Automatic classification of news articles
– Spam filtering
– Sentiment analysis of product reviews
– etc.
16
Example document:
"There were doors all round the hall, but they were all locked; and when Alice had been all the way down one side and up the other, trying every door, she walked sadly down the middle, wondering how she was ever to get out again."
17
Conditional probabilities of words
The probability of, say, the second word of the document being the word "were" cannot be estimated reliably: there is far too little data for each (position, word) pair. Instead, ignore word positions and apply m-estimate smoothing, so that each class keeps a single probability per word; a sketch of the resulting classifier follows.
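A minimal sketch of this position-independent ("bag of words") Naive Bayes text classifier, assuming whitespace-tokenized documents; function and variable names are illustrative, not from the lecture. It uses the m-estimate with a uniform prior $p = 1/|V|$ and $m = |V|$, which reduces to add-one (Laplace) smoothing, the choice made in Mitchell's text-classification algorithm:

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(docs, labels):
    """Estimate log P(c) and smoothed log P(w|c), ignoring word positions."""
    vocab = {w for doc in docs for w in doc.split()}
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)   # word_counts[c][w] = count of word w in class c
    for doc, c in zip(docs, labels):
        word_counts[c].update(doc.split())
    log_prior = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    log_cond = {}
    for c in class_counts:
        total = sum(word_counts[c].values())
        # m-estimate with uniform prior 1/|V| and m = |V|, i.e. add-one smoothing
        log_cond[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                       for w in vocab}
    return log_prior, log_cond, vocab

def classify(doc, log_prior, log_cond, vocab):
    """Return the class maximizing log P(c) + sum of log P(w|c); unseen words are skipped."""
    scores = {c: log_prior[c] + sum(log_cond[c][w] for w in doc.split() if w in vocab)
              for c in log_prior}
    return max(scores, key=scores.get)
```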
18
Unsupervised learning
No "correct" output for each instance
Clustering
– Merging "similar" instances into a group
– Hierarchical clustering, k-means, etc.
Pattern mining
– Discovering frequent patterns from a large amount of data
– Association rules, graph mining, etc.
19
Clustering Organize instances into groups whose members are similar in some way
20
Agglomerative clustering
Define a distance between every pair of instances
– e.g. cosine distance (one minus cosine similarity)
Algorithm
1. Start with every instance as a singleton cluster
2. Merge the two closest clusters into a single cluster
3. Repeat until all instances are in one cluster
21
Agglomerative clustering example: five instances (1–5) merged step by step, shown as a dendrogram.
22
Defining a distance between clusters
– Single link: distance between the closest pair of members
– Complete link: distance between the farthest pair of members
– Group-average: average distance over all pairs of members
– Centroid: distance between the two cluster centroids
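A minimal sketch of the agglomerative algorithm with single-link distance (the names and the use of Euclidean distance are assumed details; replacing min with max in cluster_distance gives complete link):

```python
import math

def cluster_distance(c1, c2):
    """Single link: the smallest pairwise distance between members."""
    return min(math.dist(a, b) for a in c1 for b in c2)

def agglomerative(points):
    """Merge the two closest clusters until one remains; return the merge history."""
    clusters = [[p] for p in points]   # every instance starts as a singleton cluster
    merges = []
    while len(clusters) > 1:
        # find the indices of the closest pair of clusters
        i, j = min(((i, j) for i in range(len(clusters))
                           for j in range(i + 1, len(clusters))),
                   key=lambda ij: cluster_distance(clusters[ij[0]], clusters[ij[1]]))
        merges.append((clusters[i], clusters[j]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return merges

# e.g. agglomerative([(0, 0), (0, 1), (5, 5), (5, 6), (9, 9)])
```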
23
k-means algorithm
Each cluster $C_i$ is represented by its centroid $c_i$. Minimize the within-cluster sum of squared distances:

$$\sum_{i=1}^{k} \sum_{x \in C_i} \lVert x - c_i \rVert^2$$

Algorithm
1. Choose k centroids $c_1, \dots, c_k$ randomly
2. Assign each instance to the cluster with the closest centroid
3. Update the centroids and go back to Step 2 (stop when the assignments no longer change)
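A minimal sketch of this loop for points given as coordinate tuples; initializing the centroids by sampling k data points and stopping when the centroids are unchanged are assumed details:

```python
import math
import random

def kmeans(points, k, seed=0):
    """Lloyd's algorithm: alternate assignment and centroid-update steps."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)            # 1. choose k initial centroids
    while True:
        # 2. assign each instance to the closest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[i].append(p)
        # 3. recompute each centroid as the mean of its cluster
        new_centroids = [
            tuple(sum(coords) / len(c) for coords in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
        if new_centroids == centroids:           # converged: assignments are stable
            return clusters, centroids
        centroids = new_centroids

# e.g. kmeans([(0, 0), (0, 1), (5, 5), (5, 6), (9, 9)], k=2)
```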