1
K-NEAREST NEIGHBORS AND DECISION TREE Nonparametric Supervised Learning
2
Outline Context of these algorithms K-nearest neighbors (k-NN) 1-nearest neighbor (1-NN) Extension to k-nearest neighbors Decision Tree Summary
3
Outline Context of these algorithms K-nearest neighbors (k-NN) 1-nearest neighbor (1-NN) Extension to k-nearest neighbors Decision Tree Summary
4
Context of these Algorithms. Supervised learning: labeled training samples are available. Nonparametric: a mathematical representation of the underlying probability distribution is hard to obtain, so no fixed form is assumed for it. Figure: various approaches in statistical pattern recognition.
5
K-Nearest Neighbors. Implicitly constructs decision boundaries. Used for both classification and regression. Figure: various approaches in statistical pattern recognition.
6
Decision Tree. Explicitly constructs decision boundaries. Used for classification. Figure: various approaches in statistical pattern recognition.
7
Outline Context of these algorithms K-nearest neighbors (k-NN) 1-nearest neighbor (1-NN) Extension to k-nearest neighbors Decision Tree Summary
8
K-Nearest Neighbors. Goal: classify an unknown sample into one of C classes (can also be used for regression). Idea: to determine the label of an unknown sample x, look at x's k nearest neighbors. Image from MIT OpenCourseWare.
9
Notation. Training samples: {(x_i, y_i), i = 1, ..., n}, where x_i is a feature vector with d features and y_i is a class label in {1, 2, ..., C}. Goal: determine the class label y for a new, unlabeled sample x.
10
Decision Boundaries. Implicitly found; shown using a Voronoi diagram, in which each training sample owns the region of feature space closer to it than to any other training sample.
11
1-Nearest Neighbor (1-NN). Consider k = 1. 1-NN algorithm: Step 1: find the training sample x_i closest to x with respect to Euclidean distance, i.e., find the x_i that minimizes ||x - x_i||. Step 2: choose the predicted label y to be y_i, the label of that nearest neighbor.
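Below is a minimal Python sketch of the 1-NN rule just described (not from the original slides); the toy data, function names, and use of plain Python lists are illustrative assumptions.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two feature vectors of equal length d
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def predict_1nn(X_train, y_train, x):
    # Step 1: find the training sample closest to x
    distances = [euclidean(x, xi) for xi in X_train]
    nearest = min(range(len(X_train)), key=lambda i: distances[i])
    # Step 2: the predicted label is the label of that nearest neighbor
    return y_train[nearest]

# Toy usage with made-up 2-D points and two classes
X_train = [(1.0, 1.0), (2.0, 1.5), (8.0, 8.0), (9.0, 7.5)]
y_train = [0, 0, 1, 1]
print(predict_1nn(X_train, y_train, (1.5, 1.2)))  # expected: 0
```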
12
Extensions of 1-Nearest Neighbor. How many neighbors to consider? 1 for 1-NN vs. k for k-NN. What distance to use? Euclidean, L1 norm, etc. How to combine neighbors' labels? Majority vote vs. weighted majority vote.
13
How many neighbors to consider? k too small (e.g., k = 1): noisy decision boundaries. k too large (e.g., k = 7): over-smoothed decision boundaries.
14
What distance to use? Euclidean distance treats every feature as equally important. Distances need to be meaningful (1 foot vs. 12 inches describes the same length with very different numbers), and some features could be insignificant. A scaled Euclidean distance, which weights each feature, addresses this.
15
Distance Metrics. Euclidean: d(x, x') = sqrt(Σ_j (x_j - x'_j)^2). L1 (Manhattan): d(x, x') = Σ_j |x_j - x'_j|. Scaled Euclidean: d(x, x') = sqrt(Σ_j w_j (x_j - x'_j)^2), where w_j is the weight given to feature j.
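A short illustrative sketch of the three distances named above; the function names and the example weights are assumptions, not from the slides.

```python
import math

def euclidean(a, b):
    # Standard Euclidean distance: every feature counts equally
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def l1(a, b):
    # L1 (Manhattan) distance: sum of absolute coordinate differences
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

def scaled_euclidean(a, b, weights):
    # Per-feature weights w_j can down-weight insignificant features
    # or compensate for unit differences (e.g., feet vs. inches)
    return math.sqrt(sum(w * (ai - bi) ** 2 for w, ai, bi in zip(weights, a, b)))

print(scaled_euclidean((1.0, 200.0), (2.0, 260.0), weights=(1.0, 0.01)))
```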
16
How to combine neighbors' labels? Majority vote: each of the k neighbors' votes is weighted equally. Weighted majority vote: closer neighbors' votes get more weight, e.g., weight w_i = 1/distance^2 (if distance = 0, that sample gets 100% of the vote). Note: for regression, the prediction is the (weighted) average of the neighbors' y values.
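A sketch of k-NN with the weighted majority vote described above (the value of k, the toy data, and the helper names are assumptions; the weight w_i = 1/distance^2 follows the slide).

```python
import math
from collections import defaultdict

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def predict_knn_weighted(X_train, y_train, x, k=3):
    # Keep the k training samples closest to x
    neighbors = sorted(zip(X_train, y_train),
                       key=lambda pair: euclidean(x, pair[0]))[:k]
    votes = defaultdict(float)
    for xi, yi in neighbors:
        d = euclidean(x, xi)
        if d == 0:
            return yi                      # exact match gets 100% of the vote
        votes[yi] += 1.0 / d ** 2          # closer neighbors get more weight
    return max(votes, key=votes.get)       # label with the largest total weight

X_train = [(1.0, 1.0), (2.0, 1.5), (8.0, 8.0), (9.0, 7.5)]
y_train = [0, 0, 1, 1]
print(predict_knn_weighted(X_train, y_train, (2.5, 2.0), k=3))  # expected: 0
```

For regression, the same weights would be used to average the neighbors' y values instead of voting.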
17
Pros and Cons of k-NN. Pros: simple; gives good results; easy to add new training examples. Cons: computationally expensive; to determine the nearest neighbor, every training sample must be visited, which is O(nd) per query (n = number of training samples, d = number of dimensions).
18
Outline Context of these algorithms K-nearest neighbors (k-NN) 1-nearest neighbor (1-NN) Extension to k-nearest neighbors Decision Tree Summary
19
Decision Tree. Goal: classify an unknown sample into one of C classes. Idea: set thresholds for a sequence of features to make a classification decision.
20
Definitions. Decision node: an if-then decision based on features of the test sample. Root node: the first decision node. Leaf node: carries a class label. Figure: an example of a simple decision tree.
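To make the node definitions concrete, here is a tiny hand-written tree as nested if-then tests; the features, thresholds, and class names are invented for illustration.

```python
def classify(sample):
    # sample: a dict of feature values; all thresholds here are made up
    if sample["petal_length"] < 2.5:        # root node (first decision node)
        return "class A"                     # leaf node
    elif sample["petal_width"] < 1.8:        # second decision node
        return "class B"                     # leaf node
    else:
        return "class C"                     # leaf node

print(classify({"petal_length": 4.0, "petal_width": 1.2}))  # -> class B
```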
21
Decision Boundaries. Explicitly defined by the chosen feature thresholds.
22
Creating Optimal Decision Trees. Classification and Regression Trees (CART) by Breiman et al. is one method to produce decision trees. It creates binary decision trees (trees where each decision node has exactly two branches) by recursively splitting the feature space into a set of non-overlapping regions.
23
Creating Optimal Decision Trees. Need to choose decisions that best partition the feature space. Choose the split s at node t that maximizes the goodness of split Φ(s|t) = 2 P_L P_R Σ_j |P(j|t_L) - P(j|t_R)|, where t_L, t_R are the left and right children of node t, P_L = # records at t_L / # records in the training set (P_R defined analogously), and P(j|t_L) = # records of class j at t_L / # records at t (P(j|t_R) analogous).
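A small sketch that scores one candidate split using the goodness-of-split formula above (the function and variable names are assumptions, and the toy data is invented).

```python
def goodness_of_split(labels_left, labels_right, n_train, class_labels):
    # labels_left / labels_right: class labels of the records routed to t_L / t_R
    n_t = len(labels_left) + len(labels_right)         # records at node t
    p_l = len(labels_left) / n_train                    # P_L
    p_r = len(labels_right) / n_train                   # P_R
    q = sum(abs(labels_left.count(j) / n_t - labels_right.count(j) / n_t)
            for j in class_labels)                      # sum_j |P(j|t_L) - P(j|t_R)|
    return 2.0 * p_l * p_r * q                          # Phi(s|t)

# Toy usage: a root node holding all 8 training records, split 5 / 3
left, right = ["a", "a", "a", "b", "a"], ["b", "b", "a"]
print(goodness_of_split(left, right, n_train=8, class_labels=["a", "b"]))
```

CART would evaluate this score for every candidate split at the node and keep the split with the largest value.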
24
Creating Optimal Decision Trees. C4.5 by Quinlan is another algorithm for creating trees. It creates trees based on optimal splits (chosen by information gain, below), and the trees are not required to be binary.
25
Creating Optimal Decision Trees. Splits are based on entropy. Suppose variable X has k possible values, and p_i = n_i/n is the estimated probability that X has value i. Entropy: H(X) = -Σ_i p_i log2(p_i). A candidate split S partitions the training set T into subsets T_1, T_2, ..., T_k; the entropy after the split is the weighted sum of the entropies of the subsets, H_S(T) = Σ_i (n_i/n) H(T_i), where n_i is the number of records in T_i.
26
Creating Optimal Decision Trees. Information gain: gain(S) = H(T) - H_S(T). C4.5 selects the candidate split S that maximizes this information gain.
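A brief sketch of the entropy and information-gain calculations above; the function names and the toy labels are illustrative.

```python
import math
from collections import Counter

def entropy(labels):
    # H = -sum_i p_i * log2(p_i), with p_i estimated from the label counts
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent_labels, subsets):
    # subsets: the label lists T_1, ..., T_k produced by one candidate split S
    n = len(parent_labels)
    h_s = sum(len(t) / n * entropy(t) for t in subsets)   # weighted sum H_S(T)
    return entropy(parent_labels) - h_s                    # gain(S) = H(T) - H_S(T)

# Toy usage: a perfect split of 6 records into two pure subsets
parent = ["a", "a", "a", "b", "b", "b"]
print(information_gain(parent, [["a", "a", "a"], ["b", "b", "b"]]))  # 1.0
```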
27
Pros and Cons of Decision Trees. Pros: simple to understand; little data preparation required; results are easy to follow; robust; handles large datasets well. Cons: practical algorithms are based on heuristics which may not give globally-optimal trees; pruning is required to avoid over-fitting the data.
28
Outline Context of these algorithms K-nearest neighbors (k-NN) 1-nearest neighbor (1-NN) Extension to k-nearest neighbors Decision Tree Summary
29
Summary. K-Nearest Neighbors: compares a new data point to similar labeled data points; implicitly defines the decision boundaries; easy, but computationally expensive. Decision Tree: uses thresholds on feature values to determine the classification; explicitly defines decision boundaries; simple, but globally-optimal trees are hard to find.
30
Sources on k-NN
Kesavaraj, G.; Sukumaran, S. "A study on classification techniques in data mining". ICCCNT.
MIT OpenCourseWare, 15.097 Spring 2012 (credit: Seyda Ertekin). http://ocw.mit.edu/courses/sloan-school-of-management/15-097-prediction-machine-learning-and-statistics-spring-2012/lecture-notes/MIT15_097S12_lec06.pdf
Oregon State, machine learning course by Xiaoli Fern. http://classes.engr.oregonstate.edu/eecs/spring2012/cs534/notes/knn.pdf
Machine learning course by Rita Osadchy. http://www.cs.haifa.ac.il/~rita/ml_course/lectures/KNN.pdf
University of Wisconsin, machine learning course by Xiaojin Zhu. http://www.cs.sun.ac.za/~kroon/courses/machine_learning/lecture2/kNN-intro_to_ML.pdf
UC Irvine, intro to artificial intelligence course by Richard Lathrop. http://www.ics.uci.edu/~rickl/courses/cs-171/2014-wq-cs171/2014-wq-cs171-lecture-slides/2014wq171-19-LearnClassifiers.pdf
31
Sources on Decision Tree
University of Wisconsin, machine learning course by Jerry Zhu. http://pages.cs.wisc.edu/~jerryzhu/cs540/handouts/dt.pdf
Larose, Daniel T. (2005). Discovering Knowledge in Data: An Introduction to Data Mining.
Jain, Anil K.; Duin, Robert P. W.; Mao, Jianchang (2000). "Statistical pattern recognition: a review". IEEE Transactions on Pattern Analysis and Machine Intelligence 22 (1): 4-37.
32
Any questions? Thank you