Instance-Based Learners

Presentation transcript:

Instance-Based Learners So far, the learning methods we have seen all build a model from a given representation and a training set. Once the model has been created, the training set is no longer used to classify unseen instances. Instance-based learners are different: they have no separate training phase and build no model. Instead, they consult the training set each time an instance must be classified.

Instance-Based Learners Although there are a number of instance-based approaches, two simple (but effective) methods are:
–K-Nearest Neighbor
  Discrete Target Functions
  Continuous Target Functions
  Distance Weighted
–General Regression Neural Networks

Instance-Based Learners: K-Nearest Neighbor (Discrete)
Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q be a query instance whose class d_q is to be determined.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} be the set of the k training instances closest to t_q, where c is an array of the indexes of the closest instances, ranked by a distance function dist(q, i) that returns the distance between t_q and t_i.
Set d_q = the most common class value d_c[i] in Neighborhood (a majority vote among the k neighbors).
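The neighborhood-and-vote procedure above is easy to state directly in code. Below is a minimal sketch in Python (not from the slides), assuming NumPy, Euclidean distance, and illustrative names such as knn_classify:

```python
import numpy as np
from collections import Counter

def knn_classify(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training instances."""
    # Distance from the query to every training instance (Euclidean assumed).
    dists = np.linalg.norm(train_X - query, axis=1)
    # Indices of the k closest instances -- the array `c` on the slide.
    c = np.argsort(dists)[:k]
    # d_q = the most common class value in the neighborhood.
    return Counter(train_y[c]).most_common(1)[0][0]

# Toy example: two 2-D clusters labeled 0 and 1.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(knn_classify(X, y, np.array([0.95, 1.0]), k=3))  # -> 1
```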

Instance-Based Learners: K-Nearest Neighbor (Continuous)
Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q be a query instance whose target value d_q is to be predicted.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} be the set of the k training instances closest to t_q, where c is an array of the indexes of the closest instances under the distance function dist(q, i).
Set d_q = (Σ_{i=1}^{k} d_c[i]) / k, i.e., the mean target value of the neighborhood.
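The only change from the discrete case is the final step: averaging replaces voting. A minimal sketch under the same assumptions (NumPy, Euclidean distance, illustrative names):

```python
import numpy as np

def knn_regress(train_X, train_y, query, k=3):
    """Predict a continuous target as the mean of the k nearest neighbors' values."""
    dists = np.linalg.norm(train_X - query, axis=1)
    c = np.argsort(dists)[:k]   # indices of the k closest instances
    return train_y[c].mean()    # d_q = (sum of d_c[i]) / k

# Toy example: samples of f(t) = t^2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.0, 4.0, 9.0])
print(knn_regress(X, y, np.array([1.5]), k=2))  # mean of 1.0 and 4.0 -> 2.5
```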

Instance-Based Learners: K-Nearest Neighbor (Distance Weighted)
Given a training set of the form {(t_1, d_1), (t_2, d_2), …, (t_n, d_n)}.
Let t_q be a query instance whose target value d_q is to be predicted.
Let Neighborhood = {(t_c[1], d_c[1]), (t_c[2], d_c[2]), …, (t_c[k], d_c[k])} be the set of the k training instances closest to t_q, where c is an array of the indexes of the closest instances under the distance function dist(q, i).
Let w_i = dist(q, c[i])^(-b), so that closer neighbors receive larger weights.
Set d_q = (Σ_{i=1}^{k} w_i d_c[i]) / (Σ_{i=1}^{k} w_i).
–k < n (Local Method)
–k = n (Global Method [Shepard's Method])
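A sketch of the weighted variant follows (again assuming NumPy and Euclidean distance; the exact-match guard is an implementation detail the slide leaves implicit, since w_i is undefined when dist(q, c[i]) = 0):

```python
import numpy as np

def weighted_knn_regress(train_X, train_y, query, k=None, b=2):
    """Distance-weighted kNN regression with w_i = dist(q, c[i])^(-b).

    k=None uses all n training instances (the global variant,
    Shepard's Method); otherwise only the k nearest are used.
    """
    dists = np.linalg.norm(train_X - query, axis=1)
    # If the query coincides with a training instance, return its target
    # directly to avoid dividing by a zero distance.
    exact = np.where(dists == 0)[0]
    if exact.size > 0:
        return train_y[exact[0]]
    c = np.argsort(dists) if k is None else np.argsort(dists)[:k]
    w = dists[c] ** (-b)                      # closer neighbors weigh more
    return np.dot(w, train_y[c]) / w.sum()    # weighted average of d_c[i]

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
print(weighted_knn_regress(X, y, np.array([1.5])))  # ~2.37 (Shepard's Method)
```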

Instance-Based Methods: General Regression Neural Networks (GRNNs)
GRNNs are global methods that consist of:
–A hidden layer of Gaussian neurons (one neuron for each training instance t_i)
–A set of output weights w_i, where w_i = d_i (each hidden neuron is weighted by its instance's target value)
–A standard deviation σ_i for each training instance i
d_q = f(t_q) = (Σ_i hf_i(t_q, t_i) d_i) / (Σ_i hf_i(t_q, t_i))
where hf_i(t_q, t_i) = exp(−‖t_q − t_i‖² / (2σ_i²))
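Since the prediction is just a kernel-weighted average, a GRNN needs no iterative training: it stores the instances and evaluates f(t_q) on demand. A minimal sketch (not from the slides; for simplicity it uses one shared sigma rather than a separate σ_i per instance):

```python
import numpy as np

def grnn_predict(train_X, train_y, query, sigma=0.5):
    """GRNN prediction: a Gaussian-weighted average of all training targets."""
    # Squared distance from the query to every training instance: ||t_q - t_i||^2.
    sq_dists = np.sum((train_X - query) ** 2, axis=1)
    # Hidden-layer activations: hf_i = exp(-||t_q - t_i||^2 / (2 sigma^2)).
    hf = np.exp(-sq_dists / (2 * sigma ** 2))
    # Output: activations weighted by the targets d_i, then normalized.
    return np.dot(hf, train_y) / hf.sum()

X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
print(grnn_predict(X, y, np.array([1.0])))  # dominated by the nearest instance
```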

Instance-Based Learning: General Regression Neural Networks (GRNNs) (figure slide; no transcribed text)