1
Chap. 8. Instance-Based Learning
Artificial Intelligence Laboratory, Shin Dong-ho
2
Abstract
Learning methods that store the training examples rather than constructing a general target function:
Nearest neighbor
Locally weighted regression
Radial basis functions
Case-based reasoning
These are lazy learning methods: they form a local estimate of the target function for each query (cf. a single target function for the entire instance space).
3
Introduction
Learning: storing the presented training data.
Local approximation: a complex target function can be represented by a collection of less complex local approximations.
Disadvantage: the cost of classifying new instances can be high, since nearly all computation is deferred to query time.
4
Basic Schema 사람 - Concept : attribute-value pair 개별 인간 - Instance
백인, 황인, 흑인 - prototypes : same as instance
5
k-Nearest Neighbor Learning
Instances: points in the n-dimensional space, described by the feature vector $\langle a_1(x), a_2(x), \ldots, a_n(x) \rangle$.
Distance: standard Euclidean distance, $d(x_i, x_j) = \sqrt{\sum_{r=1}^{n} (a_r(x_i) - a_r(x_j))^2}$.
Target function: may be discrete-valued or real-valued.
6
Training algorithm:
For each training example (x, f(x)), add the example to the list training_examples.
Classification algorithm:
Given a query instance x_q to be classified,
Let x_1, ..., x_k denote the k instances from training_examples that are nearest to x_q.
Return $\hat{f}(x_q) \leftarrow \arg\max_{v \in V} \sum_{i=1}^{k} \delta(v, f(x_i))$, where $\delta(a, b) = 1$ if $a = b$ and 0 otherwise (for a real-valued target, return the mean $\frac{1}{k}\sum_{i=1}^{k} f(x_i)$ instead).
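A minimal Python sketch of this algorithm (the names euclidean and knn_classify are ours, not from the chapter):

```python
import math
from collections import Counter

def euclidean(a, b):
    # d(x_i, x_j) = sqrt(sum_r (a_r(x_i) - a_r(x_j))^2)
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def knn_classify(training_examples, x_q, k=3):
    # training_examples is a list of (feature_vector, label) pairs, i.e. (x, f(x)).
    # Find the k stored examples nearest to the query instance x_q.
    nearest = sorted(training_examples, key=lambda ex: euclidean(ex[0], x_q))[:k]
    # Majority vote: arg max over labels v of sum_i delta(v, f(x_i)).
    return Counter(label for _, label in nearest).most_common(1)[0][0]

examples = [((0, 0), 'neg'), ((0, 1), 'neg'), ((1, 1), 'pos'), ((2, 2), 'pos')]
print(knn_classify(examples, (1.2, 1.3), k=3))  # -> 'pos'
```

Note that classification scans and sorts all stored examples, which is exactly the "cost of classifying new instances" flagged in the introduction.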
7
Distance-Weighted k-NN Algorithm
Give greater weight to closer neighbors, using $w_i \equiv \frac{1}{d(x_q, x_i)^2}$:
Discrete case: $\hat{f}(x_q) \leftarrow \arg\max_{v \in V} \sum_{i=1}^{k} w_i \, \delta(v, f(x_i))$
Real-valued case: $\hat{f}(x_q) \leftarrow \frac{\sum_{i=1}^{k} w_i f(x_i)}{\sum_{i=1}^{k} w_i}$
(If x_q exactly matches some x_i, so that $d(x_q, x_i) = 0$, assign $\hat{f}(x_q) = f(x_i)$.)
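A sketch of the real-valued case in the same style (illustrative names again; the euclidean helper is restated so the snippet stands alone):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def weighted_knn_regress(training_examples, x_q, k=3):
    # Distance-weighted k-NN estimate of a real-valued target function.
    nearest = sorted(training_examples, key=lambda ex: euclidean(ex[0], x_q))[:k]
    num = den = 0.0
    for x_i, f_xi in nearest:
        d = euclidean(x_i, x_q)
        if d == 0.0:           # exact match: f(x_q) is known; avoids division by zero
            return f_xi
        w = 1.0 / d ** 2       # w_i = 1 / d(x_q, x_i)^2
        num += w * f_xi
        den += w
    return num / den           # sum_i w_i f(x_i) / sum_i w_i
```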
8
Remarks on the k-N-N Algorithm
Robust to noisy training data.
Effective given a sufficiently large set of training data.
The distance is computed over all instance attributes, so it can be dominated by irrelevant attributes; remedies are to weight each attribute differently, or to use only a subset of the attributes.
Efficient retrieval: index the stored training examples, e.g. with a kd-tree (see the sketch below).
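If SciPy is available, a kd-tree index answers nearest-neighbor queries in roughly logarithmic time in the number of stored examples, instead of a full scan; a minimal sketch with made-up data:

```python
import numpy as np
from scipy.spatial import KDTree

# Stored training examples: feature matrix X (one row per example) and labels y.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]])
y = np.array(['neg', 'neg', 'pos', 'pos'])

tree = KDTree(X)                          # index the stored examples once
dist, idx = tree.query([1.2, 1.3], k=3)   # k nearest neighbors of the query
print(y[idx])                             # labels of the 3 nearest examples
```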
9
Locally Weighted Regression
Local: the function is approximated based only on data near the query point.
Weighted: the contribution of each training example is weighted by its distance from the query point.
Regression: approximating a real-valued function.
10
Locally Weighted Linear Regression
Use a linear hypothesis $\hat{f}(x) = w_0 + w_1 a_1(x) + \cdots + w_n a_n(x)$, where $a_i(x)$ is the i-th attribute of the instance x.
Minimize the squared-error sum $E = \frac{1}{2} \sum_{x \in D} (f(x) - \hat{f}(x))^2$, where D is the training set.
Gradient-descent training rule: $\Delta w_j = \eta \sum_{x \in D} (f(x) - \hat{f}(x)) \, a_j(x)$, where $\eta$ is a learning rate.
11
A Local Approximation
Three ways to localize the error criterion for a query x_q:
1. Minimize the squared error over the k nearest neighbors: $E_1(x_q) = \frac{1}{2} \sum_{x \in \text{kNN}(x_q)} (f(x) - \hat{f}(x))^2$
2. Minimize the squared error over the entire set D, weighting each example by a kernel K of its distance: $E_2(x_q) = \frac{1}{2} \sum_{x \in D} (f(x) - \hat{f}(x))^2 \, K(d(x_q, x))$
3. Combine 1 and 2: $E_3(x_q) = \frac{1}{2} \sum_{x \in \text{kNN}(x_q)} (f(x) - \hat{f}(x))^2 \, K(d(x_q, x))$, with the training rule $\Delta w_j = \eta \sum_{x \in \text{kNN}(x_q)} K(d(x_q, x)) \, (f(x) - \hat{f}(x)) \, a_j(x)$
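A small NumPy sketch of criterion 2 under a Gaussian kernel; the function names are ours, and it solves the weighted squared-error criterion in closed form by weighted least squares rather than by the gradient-descent rule above:

```python
import numpy as np

def gaussian_kernel(d, sigma=1.0):
    # K(d) = exp(-d^2 / (2 sigma^2)): weight falls off smoothly with distance
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def locally_weighted_predict(X, y, x_q, sigma=1.0):
    # X: (m, n) training inputs, y: (m,) targets, x_q: (n,) query point.
    x_q = np.asarray(x_q, dtype=float)
    d = np.linalg.norm(X - x_q, axis=1)          # d(x_q, x) for every example
    k = gaussian_kernel(d, sigma)                # kernel weights K(d(x_q, x))
    A = np.hstack([np.ones((len(X), 1)), X])     # bias column supplies w_0
    sw = np.sqrt(k)                              # sqrt-weights make plain least
    w, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)  # squares weighted
    return np.concatenate([[1.0], x_q]) @ w      # evaluate the local fit at x_q
```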
12
Radial Basis Functions
Closely related to both distance-weighted regression and ANNs:
$\hat{f}(x) = w_0 + \sum_{u=1}^{k} w_u K_u(d(x_u, x))$
where x_u is an instance from X and K_u(d(x_u, x)) is a kernel function.
The contribution from each of the K_u(d(x_u, x)) terms is localized to a region near the point x_u; a common choice is the Gaussian function $K_u(d(x_u, x)) = e^{-\frac{1}{2\sigma_u^2} d^2(x_u, x)}$.
Corresponding two-layer network:
first layer: computes the values of the various K_u(d(x_u, x))
second layer: computes a linear combination of the first-layer unit values.
13
RBF Network Training
First stage: construct the kernel functions, i.e. choose the number k of hidden units and set each center x_u and width sigma_u.
Second stage: adjust the weights w_u to fit the training data.
RBF networks thus provide a global approximation to the target function, represented by a linear combination of many local kernel functions (see the sketch below).
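A compact sketch of this two-stage scheme; the class name, the shared kernel width, and the least-squares fit in stage 2 are our choices (centers might, for instance, be a subset of the training instances):

```python
import numpy as np

class RBFNetwork:
    def __init__(self, centers, sigma=1.0):
        self.centers = np.asarray(centers, dtype=float)  # stage 1: the x_u
        self.sigma = sigma                               # shared width sigma_u

    def _features(self, X):
        # First layer: K_u(d(x_u, x)) for every center u and input x.
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        K = np.exp(-d ** 2 / (2 * self.sigma ** 2))
        return np.hstack([np.ones((len(X), 1)), K])      # column of ones for w_0

    def fit(self, X, y):
        # Stage 2 / second layer: fit the linear weights by least squares.
        self.w, *_ = np.linalg.lstsq(self._features(np.asarray(X, dtype=float)),
                                     y, rcond=None)
        return self

    def predict(self, X):
        return self._features(np.asarray(X, dtype=float)) @ self.w
```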
14
Case-Based Reasoning
CBR is a lazy learning method that classifies new query instances by retrieving and analyzing similar stored instances.
Unlike k-NN, instances are represented by rich symbolic descriptions rather than points in an n-dimensional space.
15
CADET System
CADET is a case-based design assistant: it stores a library of previous mechanical device designs (e.g. water faucets) and retrieves similar cases to help propose designs for new problems.
16
Cont'd
Library: cases stored as functional descriptions (function graphs).
Search: match the design problem against stored cases; when no exact match exists, match various subgraphs, apply rewrite rules derived from general knowledge to produce functionally equivalent graphs, and merge partial matches (the merging problem).
Target function f: maps function graphs to the structures that implement them.
17
Remarks on Lazy and Eager Learning
Lazy learning: generalization at query time
k-N-N, locally weighted regression, case-based reasoning
Eager learning: generalization at training time
radial basis function networks, Back-Propagation
18
Differences
Computation time:
training: eager > lazy; query: eager < lazy
Classifications produced for new queries (the form of the target function):
eager: a single hypothesis (e.g. one linear function), committed to at training time, that must cover the entire instance space
lazy: a combination of many local approximations, one formed per query
19
Summary
Instance-based learning: forms a different local approximation of the target function for each query instance.
k-N-N: the target value is estimated from the known values of the k nearest training examples.
Locally weighted regression: an explicit local approximation to the target function.
RBF networks: ANNs constructed from localized kernel functions.
CBR: instances represented by rich logical descriptions.