Instance-Based Learners

So far, the learning methods we have seen all build a model from a chosen representation and a training set. Once the model has been built, the training set is no longer used when classifying unseen instances. Instance-based learners are different: they have no training phase and build no model. Instead, they consult the training set each time an instance must be classified.
Instance-Based Learners

Although there are a number of instance-based approaches, two simple (but effective) methods are:
–K-Nearest Neighbor
 –Discrete target functions
 –Continuous target functions
 –Distance weighted
–General Regression Neural Networks
Instance-Based Learners: K-Nearest Neighbor (Discrete)

Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q represent an instance to be classified as d_q.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} represent the set of k training instances closest to instance q, where c is an array of the indexes of the closest instances to q, ranked by a distance function dist(q, i) that returns the distance between t_q and t_i. (The distance function is written dist here to avoid a clash with the target values d_i.)
Simply set d_q = the most common d_c[i] in Neighborhood.
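The discrete rule above can be sketched in a few lines of Python. This is a minimal illustration, assuming numeric feature vectors and Euclidean distance; the function name knn_classify and the sample data are invented for the example.

```python
# Discrete k-NN sketch: majority vote among the k closest training instances.
# Assumes numeric feature tuples and Euclidean distance (math.dist).
from collections import Counter
from math import dist

def knn_classify(train, t_q, k):
    """train: list of (t_i, d_i) pairs; t_q: query vector; returns the majority label."""
    # Sort training pairs by distance to the query and keep the k closest (the Neighborhood).
    neighborhood = sorted(train, key=lambda pair: dist(pair[0], t_q))[:k]
    # d_q = the most common d_i in the Neighborhood.
    return Counter(d for _, d in neighborhood).most_common(1)[0][0]

points = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"), ((1.0, 1.0), "b"),
          ((0.9, 1.1), "b"), ((0.2, 0.1), "a")]
print(knn_classify(points, (0.0, 0.1), 3))  # three nearest neighbors are all "a"
```

Note that all the work happens at query time: there is no fit step, which is exactly the "no training phase" property described above.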
Instance-Based Learners: K-Nearest Neighbor (Continuous)

Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q represent an instance to be classified as d_q.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} represent the set of k training instances closest to instance q, where c is an array of the indexes of the closest instances to q, ranked by a distance function dist(q, i) that returns the distance between t_q and t_i.
Simply set d_q = (Σ_i d_c[i]) / k, the mean target value of the k nearest neighbors.
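The continuous variant differs only in the final step: average the neighbors' targets instead of voting. A minimal sketch, again assuming Euclidean distance, with the name knn_regress and the data invented for illustration:

```python
# Continuous k-NN sketch: predict the mean target of the k closest instances.
from math import dist

def knn_regress(train, t_q, k):
    """train: list of (t_i, d_i) pairs with numeric d_i; returns the mean of the k nearest d_i."""
    neighborhood = sorted(train, key=lambda pair: dist(pair[0], t_q))[:k]
    # d_q = (sum of d_c[i]) / k over the Neighborhood.
    return sum(d for _, d in neighborhood) / k

samples = [((0.0,), 1.0), ((1.0,), 2.0), ((2.0,), 3.0), ((10.0,), 50.0)]
print(knn_regress(samples, (0.5,), 2))  # mean of 1.0 and 2.0 -> 1.5
```

Because every neighbor counts equally, a neighbor at the edge of the neighborhood pulls the estimate as hard as one right next to the query; the distance-weighted variant on the next slide fixes this.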
Instance-Based Learners: K-Nearest Neighbor (Distance Weighted)

Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q represent an instance to be classified as d_q.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} represent the set of k training instances closest to instance q, where c is an array of the indexes of the closest instances to q, ranked by a distance function dist(q, i) that returns the distance between t_q and t_i.
Let w_i = dist(q, c[i])^(-b), so that closer neighbors receive larger weights.
Set d_q = (Σ_{i=1..k} w_i d_c[i]) / (Σ_{i=1..k} w_i)
–k < n (local method)
–k = n (global method [Shepard's Method])
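The weighted rule can be sketched as follows. One detail the slide leaves implicit: when the query coincides with a training instance, dist(q, c[i])^(-b) is undefined (division by zero), so the sketch returns that instance's target directly. The name knn_weighted and the sample data are illustrative.

```python
# Distance-weighted k-NN sketch: w_i = dist(q, c[i])^(-b),
# d_q = sum(w_i * d_c[i]) / sum(w_i).
from math import dist

def knn_weighted(train, t_q, k, b=2):
    neighborhood = sorted(train, key=lambda pair: dist(pair[0], t_q))[:k]
    # An exact match would give an infinite weight; return its target directly.
    for t, d in neighborhood:
        if dist(t, t_q) == 0.0:
            return d
    weights = [dist(t, t_q) ** -b for t, _ in neighborhood]
    return sum(w * d for w, (_, d) in zip(weights, neighborhood)) / sum(weights)

samples = [((0.0,), 1.0), ((1.0,), 2.0), ((2.0,), 3.0)]
# k = len(samples) uses every training instance: the global variant (Shepard's method).
print(knn_weighted(samples, (0.25,), len(samples)))
```

With the query at 0.25, the nearest instance (target 1.0) dominates, so the estimate lands much closer to 1.0 than the unweighted mean of 2.0 would.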
Instance-Based Methods: General Regression Neural Networks (GRNNs)

GRNNs are global methods that consist of:
–A hidden layer of Gaussian neurons (one neuron for each training instance t_i)
–A set of weights w_i, where w_i = d_i
–A standard deviation σ_i for each training instance i

d_q = f(t_q) = (Σ_i hf_i(t_q, t_i) d_i) / (Σ_i hf_i(t_q, t_i))
where hf_i(t_q, t_i) = exp(-||t_q - t_i||² / (2σ_i²))
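The GRNN prediction above can be sketched directly from the two formulas: each training instance contributes its target d_i, weighted by a Gaussian activation that decays with its distance from the query. The sketch below assumes, for simplicity, a single shared σ rather than one σ_i per instance; grnn_predict and the sample data are illustrative names.

```python
# GRNN sketch: d_q = sum(hf_i * d_i) / sum(hf_i),
# with hf_i = exp(-||t_q - t_i||^2 / (2 * sigma^2)).
from math import dist, exp

def grnn_predict(train, t_q, sigma=1.0):
    """train: list of (t_i, d_i) pairs; one Gaussian 'neuron' per training instance."""
    # Hidden-layer activations: one Gaussian per training instance.
    acts = [exp(-dist(t, t_q) ** 2 / (2 * sigma ** 2)) for t, _ in train]
    # Output: activation-weighted average of the targets d_i.
    return sum(a * d for a, (_, d) in zip(acts, train)) / sum(acts)

samples = [((0.0,), 0.0), ((1.0,), 1.0), ((2.0,), 4.0)]
print(grnn_predict(samples, (1.0,), sigma=0.5))
```

Note the structural similarity to distance-weighted k-NN with k = n: a GRNN is also a global weighted average, but its Gaussian weights fall off smoothly and never blow up at zero distance, so no special case is needed for exact matches.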