CS292 Computational Vision and Language
Pattern Recognition and Classification
Classification and Recognition
Pattern Recognition

This section gives a brief survey of methods used to recognise objects. These methods apply to the recognition of objects in images, but they are applicable to other kinds of data as well. The basic approach views an instance to be recognised as a vector of measurements. We introduce some simple methods whereby a machine can learn to recognise objects by being taught from samples. Together with the previous sessions, this should let us understand the design of some complete machine vision systems and experiment with building a complete set of algorithms for a simple, yet real, problem.
Pattern Recognition Problems

In many practical problems, there is a need to make some decision about the content of an image or about the classification of an object that it contains, e.g., handprinted character recognition, or a food market recognition system (see http://domino.research.ibm.com/comm/wwwr_thinkresearch.nsf/pages/machine399.html).
Common Model for Classification

(Diagram: the common model comprises the set of classes, a feature extractor, and a classifier.)
Classes

– There is a set of m known classes of objects. These are known either by some description or by having a set of examples for each of the classes.
– An ideal class is a set of objects having some important common properties; in practice, the class to which an object belongs is denoted by some class label.
– Classification is a process that assigns a label to an object according to some representation of the object's properties.
– A classifier is a device or algorithm that inputs an object representation and outputs a class label.
– A reject class is a generic class for objects that cannot be placed in any of the designated classes.
Feature Extractor

The feature extractor extracts information relevant to classification from the data input by the sensor.
Figure 1. Classification system diagram: discriminant functions f(x, K) perform some computation on the input feature vector x using some knowledge K from training, and pass their results to a final stage that determines the class. (Diagram: the input feature vector x = (x1, x2, ..., xd) feeds distance or probability computations F1(x, K), F2(x, K), ..., Fm(x, K); a compare-and-decide stage outputs the classification C(x).)
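As an illustration, the Figure 1 pipeline can be sketched in a few lines of Python. This is a minimal sketch, not code from the course: it assumes the discriminant functions return distances (smaller is better), and all names here are illustrative.

    def classify(x, discriminants, reject_threshold=None):
        # Distance or probability computations: evaluate each class's
        # discriminant F_i(x, K); the knowledge K is baked into each function.
        scores = [f(x) for f in discriminants]
        # Compare & decide: pick the class with the smallest distance.
        best = min(range(len(scores)), key=lambda i: scores[i])
        # Optionally route weak matches to the reject class (label 0 here).
        if reject_threshold is not None and scores[best] > reject_threshold:
            return 0
        return best + 1  # class labels in the range [1, m]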
Features Used for Representation

A crucial issue for both theory and practice is: what representation or encoding of the object is used in the recognition process? Alternatively, what features are important for recognition?
Feature Vector Representation

Objects may be compared for similarity based on their representation as vectors of measurements. Suppose each object is represented by exactly d measurements. The ith coordinate of such a feature vector then has the same meaning for every object.
Feature Vector Representation

The similarity, or closeness, between the feature vector representations of two objects can then be described using the Euclidean distance between the vectors, defined in Equation 1. Sometimes the Euclidean distance between an observed vector and a stored class prototype can provide a useful classification function.

Definition: the Euclidean distance between two d-dimensional feature vectors x_1 and x_2 is

    \|x_1 - x_2\| = \sqrt{ \sum_{i=1}^{d} (x_{1,i} - x_{2,i})^2 }    (Equation 1)
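Equation 1 translates directly into code. A minimal Python sketch (the function name is illustrative):

    import math

    def euclidean_distance(x1, x2):
        # Equation 1: square root of the summed squared coordinate differences.
        assert len(x1) == len(x2), "both vectors must have the same dimension d"
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(x1, x2)))

    print(euclidean_distance([0.0, 0.0], [3.0, 4.0]))  # prints 5.0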
Figure 2. Two compact classes: classification using the nearest mean will yield a low error rate. (Scatter plot: two compact clusters of samples, marked 'x' and 'o', each with its class mean marked.)
Classification using the Nearest Class Mean

A simple classification algorithm is to summarize the sample data from each class using the class mean vector, or centroid,

    \bar{x}_i = \frac{1}{n_i} \sum_{j=1}^{n_i} x_{i,j}

where x_{i,j} is the jth sample feature vector from class i and n_i is the number of samples from class i. An unknown object with feature vector x is classified as class i if it is [much] closer to the mean vector of class i than to any other class mean vector. We have the opportunity to put x into the reject class if it is not close enough to any of the sample means.
Classification using the Nearest Class Mean

This classification method is simple and fast and will work in some problems where the sample vectors from each class are compact and far from those of the other classes.
– A simple two-class example with feature vectors of dimension d = 2 is shown in Figure 2.
– Since there are samples of each class that are equidistant from both class centroids, the error rate will not be zero, although we expect it to be very low if the structure of the samples represents the structure of future sensed objects well.
Classification using the Nearest Class Mean

– We now have one concrete interpretation for the function boxes of Figure 1: the ith function box computes the distance between the unknown input x and the mean vector of the training samples from class i. The training samples constitute the knowledge K about the class. A sketch of this classifier is given below.
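A minimal Python sketch of the nearest-mean classifier, assuming the training data arrives as a mapping from class labels to lists of sample vectors; all names are illustrative, and the reject test here is a simple distance threshold:

    import math

    def class_means(samples):
        # samples: dict mapping class label -> list of d-dimensional vectors.
        # The centroid averages each coordinate over the class's samples.
        means = {}
        for label, vectors in samples.items():
            d = len(vectors[0])
            means[label] = [sum(v[i] for v in vectors) / len(vectors)
                            for i in range(d)]
        return means

    def nearest_mean_class(x, means, reject_distance=float("inf")):
        # Each "function box" of Figure 1: distance from x to one class mean.
        best_label, best_dist = None, float("inf")
        for label, mean in means.items():
            dist = math.dist(x, mean)
            if dist < best_dist:
                best_label, best_dist = label, dist
        # Compare & decide, with an optional reject class.
        return best_label if best_dist <= reject_distance else "reject"

    means = class_means({1: [[0, 0], [1, 1]], 2: [[8, 8], [9, 9]]})
    print(nearest_mean_class([0.5, 0.2], means))  # prints 1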
Problem: Figure 3 shows three classes with complex structure; classification using the nearest mean will yield poor results. (Scatter plot in the (x1, x2) plane: three interleaved classes of samples, marked 'x', 'o', and '+'; one class appears as two separate clusters.)
Classification Using the Nearest Neighbours

A more flexible but more expensive method of classification is to classify the unknown feature vector x into the class of the individual sample closest to it. This is the nearest-neighbour rule. It can be effective even when classes have complex structure in d-space and when classes overlap. No assumptions need to be made about models for the distribution of feature vectors in space; the algorithm uses only the existing training samples.
Classification Using the Nearest Neighbours: Algorithm

Compute the k nearest neighbours of x and return the majority class.

S is a set of n labelled class samples s_i, where s_i.x is a feature vector and s_i.c is its integer class label. x is the unknown input feature vector to be classified. A is an array capable of holding up to k samples, kept sorted by distance d. The value returned is a class label in the range [1, m]. A runnable sketch follows the pseudocode below.

    procedure K_Nearest_Neighbours(x, S)
    {
        make A empty;
        for all samples s_i in S
        {
            d = Euclidean distance between s_i.x and x;
            if A has fewer than k elements
                then insert (d, s_i) into A;
            else if d is less than the maximum distance in A
            {
                remove the sample with the maximum distance from A;
                insert (d, s_i) into A;
            }
        }
        assert A now holds the k samples of S closest to x;
        if a majority of the labels s_i.c in A are some class c_0
            then classify x into class c_0;
            else classify x into the reject class;
        return the class of x;
    }
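A minimal Python version of the procedure above, assuming the training set is a list of (feature_vector, class_label) pairs; for brevity it sorts the whole sample set rather than maintaining the fixed-size array A, which yields the same k nearest neighbours:

    import math
    from collections import Counter

    def k_nearest_neighbours(x, samples, k=3):
        # Keep the k labelled samples closest to x (Euclidean distance).
        neighbours = sorted(samples, key=lambda s: math.dist(x, s[0]))[:k]
        # Majority vote over the neighbours' class labels.
        votes = Counter(label for _, label in neighbours)
        label, count = votes.most_common(1)[0]
        # Require a strict majority; otherwise fall back to the reject class.
        return label if count > k // 2 else "reject"

    training = [([0, 0], 1), ([0, 1], 1), ([1, 0], 1),
                ([5, 5], 2), ([6, 5], 2)]
    print(k_nearest_neighbours([0.5, 0.5], training, k=3))  # prints 1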