Classification and clustering - interpreting and exploring data
Peter Fox Data Analytics ITWS-4600/ITWS-6600/MATP-4450/CSCI-4960 Group 2, Module 5, September 18, 2018
K-nearest neighbors (knn)
Can be used for both regression and classification; it is non-parametric.
Is supervised, i.e. uses a training set and a test set.
KNN classifies objects based on the closest training examples in the feature space: an object is assigned the class held by the majority of its K nearest neighbors. K is always a positive integer.
The neighbors are taken from a set of objects for which the correct classification is known.
It is usual to use the Euclidean distance, though other distance measures, such as the Manhattan distance, could in principle be used instead.
Algorithm The algorithm for computing the K nearest neighbors is as follows (a sketch in R follows below): 1. Determine the parameter K = the number of nearest neighbors, beforehand. This value is all up to you. 2. Calculate the distance between the query instance and all the training samples. Any distance measure can be used. 3. Sort the distances and keep the K training samples with the smallest distances (the K nearest neighbors). 4. Since this is supervised learning, look up the known categories of those K neighbors. 5. Use the majority category among the K nearest neighbors as the predicted value.
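For concreteness, a minimal sketch of these steps in R. The function knn_predict and the objects train, labels and query are illustrative only (not from the lab scripts); it assumes a numeric training matrix, a factor of known classes, one numeric query row, and Euclidean distance.
knn_predict <- function(train, labels, query, k = 5) {
  # Step 2: distance from the query instance to every training sample
  d <- sqrt(colSums((t(train) - query)^2))
  # Step 3: indices of the k smallest distances (the k nearest neighbors)
  nn <- order(d)[1:k]
  # Steps 4-5: majority vote among the known categories of those neighbors
  names(which.max(table(labels[nn])))
}
# Example: classify the first iris flower using the remaining 149 as training data
knn_predict(as.matrix(iris[-1, 1:4]), iris$Species[-1], unlist(iris[1, 1:4]), k = 5)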
Distance metrics Euclidean distance is the most commonly used distance measure; when people talk about "distance", this is usually what they mean. Euclidean distance, or simply 'distance', is the square root of the sum of squared differences between the coordinates of a pair of objects, i.e. a direct application of the Pythagorean theorem.
The taxicab metric is also known as rectilinear distance, L1 distance or L1 norm, city block distance, Manhattan distance, or Manhattan length, with corresponding variations in the name of the geometry. It represents the distance between points on a city road grid, and is the sum of the absolute differences between the coordinates of a pair of objects.
More generally The general distance metric is the Minkowski distance. When lambda equals 1 it becomes the city block (Manhattan) distance, and when lambda equals 2 it becomes the Euclidean distance. The special case of lambda equal to infinity (taking the limit) is the Chebyshev distance. Chebyshev distance, also called the maximum value distance, is defined on a vector space where the distance between two vectors is the greatest of their differences along any single coordinate dimension. In other words, it takes the largest absolute difference between the coordinates of a pair of objects.
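As a quick sketch, base R's dist() implements all of these; the two points below are just an example.
x <- rbind(c(0, 0), c(3, 4))            # two points in the plane
dist(x, method = "euclidean")           # 5  (lambda = 2)
dist(x, method = "manhattan")           # 7  (lambda = 1, city block)
dist(x, method = "minkowski", p = 3)    # general Minkowski with lambda = p = 3
dist(x, method = "maximum")             # 4  (Chebyshev, lambda -> infinity)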
Choice of k? Don’t you hate it when the instructions read: "the choice of ‘k’ is all up to you"? Loop over different k and evaluate the results…
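One way to do that, as a self-contained sketch on the iris data (using knn() from the class package and a simple hold-out split; the variable names are illustrative):
library(class)
set.seed(1)
idx   <- sample(nrow(iris), 0.7 * nrow(iris))        # 70% of rows for training
train <- iris[idx, 1:4];    test  <- iris[-idx, 1:4]
cl    <- iris$Species[idx]; truth <- iris$Species[-idx]
for (k in c(1, 3, 5, 7, 9, 15)) {
  pred <- knn(train, test, cl, k = k)
  cat("k =", k, " accuracy =", round(mean(pred == truth), 3), "\n")
}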
What does “Near” mean… More on this in the next topic, but…
DISTANCE – and what does that mean?
RANGE – acceptable, expected?
SHAPE – i.e. the form
Training and Testing We are going to do much more on this going forward…
Regression, as we have used it so far (no train/test split), uses all the data to ‘train’ the model, i.e. to calculate the coefficients; residuals are then the differences between the actual and modeled values over all the data.
Supervision means not all of the data is used for training, because you want to test on the untrained (held-out) set before you predict for new values.
What is the ‘sampling’ strategy for the training set? (1b) A sketch of two options follows below.
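Two common answers to the sampling question, sketched on iris (illustrative only; object names are not from the lab scripts):
set.seed(42)
n <- nrow(iris)
# (a) simple random sample: 90% of the rows, ignoring class
train.a <- sample(1:n, size = round(0.9 * n), replace = FALSE)
# (b) stratified sample: 90% of the rows within each class,
#     so no class is under-represented in the training set
train.b <- unlist(lapply(split(1:n, iris$Species),
                         function(rows) sample(rows, round(0.9 * length(rows)))))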
Summing up ‘knn’
Advantages
Robust to noisy training data (especially if votes are weighted, e.g. by the inverse square of the distance)
Effective if the training data is large
Disadvantages
Need to determine the value of the parameter K (the number of nearest neighbors)
With distance-based learning it is not clear which type of distance, or which attributes, to use to produce the best results. Should we use all attributes or only certain ones?
Computation cost is quite high because we need to compute the distance from each query instance to all training samples. Some indexing (e.g. a K-D tree) may reduce this computational cost.
K-means Unsupervised classification, i.e. no classes are known beforehand.
Types:
Hierarchical: successively determine new clusters from previously determined clusters (parent/child clusters).
Partitional: establish all clusters at once, at the same level.
Distance Measure Clustering is about finding “similarity”.
To find how similar two objects are, one needs a “distance” measure. Similar objects (same cluster) should be close to one another (short distance).
Distance Measure There are many ways to define a distance measure.
Some elements may be close according to one distance measure and farther away according to another. Selecting a good distance measure is an important step in clustering.
Some Distance Functions (again)
Euclidean distance (2-norm): the most commonly used, also called "as the crow flies" distance: d(x, y) = sqrt( sum_i (x_i - y_i)^2 )
Manhattan distance (1-norm): also called "taxicab distance": d(x, y) = sum_i |x_i - y_i|
In general, the Minkowski metric (p-norm): d(x, y) = ( sum_i |x_i - y_i|^p )^(1/p)
K-Means Clustering Separate the objects (data points) into K clusters.
Cluster center (centroid) = the average of all the data points in the cluster. Each data point is assigned to the cluster whose centroid is nearest (using the distance function).
K-Means Algorithm
1. Place K points into the space of the objects being clustered; they represent the initial group centroids.
2. Assign each object to the group that has the closest centroid.
3. Recalculate the positions of the K centroids.
4. Repeat steps 2 and 3 until the group centroids no longer move.
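A minimal sketch of this algorithm via base R's kmeans(), run on the four iris measurements (K = 3 is chosen here only because we happen to know there are three species):
set.seed(7)
km <- kmeans(iris[, 1:4], centers = 3, nstart = 25)  # nstart = 25 random restarts
km$centers                       # the final K centroids
table(km$cluster, iris$Species)  # how the clusters line up with the known species
plot(iris$Petal.Length, iris$Petal.Width, col = km$cluster, pch = 19)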
K-Means Algorithm: Example Output
Describe v. Predict
Predict = Decide
K-means "Age","Gender","Impressions","Clicks","Signed_In" 36,0,3,0,1 73,1,3,0,1 30,0,3,0,1 49,1,3,0,1 47,1,11,0,1 47,0,11,1,1 (nyt datasets) Model e.g.: If Age<45 and Impressions >5 then Gender=female (0) Age ranges? 41-45, 46-50, etc?
Contingency tables
> table(nyt1$Impressions,nyt1$Gender) # Contingency table - displays the (multivariate) frequency distribution of the variables. Tests for significance (not now)
> table(nyt1$Clicks,nyt1$Gender)
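If the raw counts are hard to read, prop.table() turns the same contingency table into proportions (a small sketch, same nyt1 as above):
tbl <- table(nyt1$Clicks, nyt1$Gender)
prop.table(tbl)              # joint proportions
prop.table(tbl, margin = 1)  # within each Clicks value, the split across Gender
prop.table(tbl, margin = 2)  # within each Gender, the split across Clicks values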
Classification Exercises (group1/lab2_knn1.R)
> library(class) # provides knn()
> nyt1<-read.csv("nyt1.csv")
> nyt1<-nyt1[which(nyt1$Impressions>0 & nyt1$Clicks>0 & nyt1$Age>0),]
> nnyt1<-dim(nyt1)[1] # shrink it down!
> sampling.rate=0.9
> num.test.set.labels=nnyt1*(1.-sampling.rate)
> training <-sample(1:nnyt1,sampling.rate*nnyt1, replace=FALSE)
> train<-subset(nyt1[training,],select=c(Age,Impressions))
> testing<-setdiff(1:nnyt1,training)
> test<-subset(nyt1[testing,],select=c(Age,Impressions))
> cg<-nyt1$Gender[training]
> true.labels<-nyt1$Gender[testing]
> classif<-knn(train,test,cg,k=5) #
> classif
> attributes(.Last.value) # interpretation to come!
K Nearest Neighbors (classification)
> nyt1<-read.csv("nyt1.csv") … from week 3 lab slides or scripts
> classif<-knn(train,test,cg,k=5) #
> head(true.labels)
[1] …
> head(classif)
[1] …
Levels: 0 1
> ncorrect<-true.labels==classif
> table(ncorrect)["TRUE"] # or
> length(which(ncorrect))
What do you conclude?
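To help answer that, a slightly fuller look at the same run (a sketch using the objects defined above):
table(true.labels, classif)   # confusion matrix: actual Gender vs predicted
mean(true.labels == classif)  # overall accuracy of the k = 5 classifier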
Weighted KNN…
require(kknn)
data(iris)
m <- dim(iris)[1]
val <- sample(1:m, size = round(m/3), replace = FALSE, prob = rep(1/m, m))
iris.learn <- iris[-val,]
iris.valid <- iris[val,]
iris.kknn <- kknn(Species~., iris.learn, iris.valid, distance = 1, kernel = "triangular")
summary(iris.kknn)
fit <- fitted(iris.kknn)
table(iris.valid$Species, fit)
pcol <- as.character(as.numeric(iris.valid$Species))
pairs(iris.valid[1:4], pch = pcol, col = c("green3", "red")[(iris.valid$Species != fit)+1])
summary
Call:
kknn(formula = Species ~ ., train = iris.learn, test = iris.valid, distance = 1, kernel = "triangular")
Response: "nominal"
fit  prob.setosa  prob.versicolor  prob.virginica
(one row per validation case, giving the fitted species and the three class probabilities; values truncated)
table
table(iris.valid$Species, fit): a 3 x 3 confusion matrix of the true species (rows setosa, versicolor, virginica) against the fitted species (columns); counts truncated.
pcol <- as.character(as.numeric(iris.valid$Species))
pairs(iris.valid[1:4], pch = pcol, col = c("green3", "red")[(iris.valid$Species != fit)+1])
Ctrees? We want a means to make decisions – so how about an "if this, then this, otherwise that" approach == tree methods, or branching. Conditional Inference – what is that? Instead of: if (This1 .and. This2 .and. This3 .and. …)
Decision tree classifier
Conditional Inference Tree
> require(party) # don’t get me started!
> str(iris)
'data.frame': 150 obs. of 5 variables:
 $ Sepal.Length: num  …
 $ Sepal.Width : num  …
 $ Petal.Length: num  …
 $ Petal.Width : num  …
 $ Species     : Factor w/ 3 levels "setosa","versicolor",..: …
> iris_ctree <- ctree(Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data=iris)
Ctree
> print(iris_ctree)
Conditional inference tree with 4 terminal nodes
Response: Species
Inputs: Sepal.Length, Sepal.Width, Petal.Length, Petal.Width
Number of observations: 150
1) Petal.Length <= 1.9; criterion = 1, statistic = …
  2)* weights = 50
1) Petal.Length > 1.9
  3) Petal.Width <= 1.7; criterion = 1, statistic = …
    4) Petal.Length <= 4.8; criterion = 0.999, statistic = …
      5)* weights = 46
    4) Petal.Length > 4.8
      6)* weights = 8
  3) Petal.Width > 1.7
    7)* weights = 46
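To use the fitted tree rather than just read it, predict() returns the predicted species, which can be compared with the known labels (a short sketch):
table(predict(iris_ctree), iris$Species)   # predicted vs actual, on the training data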
plot(iris_ctree)
> plot(iris_ctree, type="simple") # try this
Beyond plot: pairs
pairs(iris[1:4], main = "Anderson's Iris Data -- 3 species", pch = 21, bg = c("red", "green3", "blue")[unclass(iris$Species)])
But the means for branching…
Do not have to be threshold based (~ distance)
Can be cluster based = I am more similar to you if I possess these attributes (in this range)
Thus: trees + clusters = hierarchical clustering
In R: hclust (and others) in the stats package
Try hclust for iris
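One possible version of that exercise (a sketch: Euclidean distance and complete linkage are just the dist() and hclust() defaults):
d  <- dist(as.matrix(iris[, 1:4]))   # distances on the four measurements only
hc <- hclust(d)
plot(hc, labels = FALSE)
groups <- cutree(hc, k = 3)          # cut the tree into 3 clusters
table(groups, iris$Species)          # compare clusters with the known species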
require(gpairs); gpairs(iris)
Better scatterplots
install.packages("car")
require(car)
scatterplotMatrix(iris)
require(lattice); splom(iris) # default
splom extra!
require(lattice)
super.sym <- trellis.par.get("superpose.symbol")
splom(~iris[1:4], groups = Species, data = iris, panel = panel.superpose,
      key = list(title = "Three Varieties of Iris", columns = 3,
                 points = list(pch = super.sym$pch[1:3], col = super.sym$col[1:3]),
                 text = list(c("Setosa", "Versicolor", "Virginica"))))
splom(~iris[1:3]|Species, data = iris, layout=c(2,2), pscales = 0,
      varnames = c("Sepal\nLength", "Sepal\nWidth", "Petal\nLength"),
      page = function(...) {
        ltext(x = seq(.6, .8, length.out = 4), y = seq(.9, .6, length.out = 4),
              labels = c("Three", "Varieties", "of", "Iris"), cex = 2)
      })
parallelplot(~iris[1:4] | Species, iris)
parallelplot(~iris[1:4], iris, groups = Species, horizontal.axis = FALSE, scales = list(x = list(rot = 90)))
Shift the dataset…
Hierarchical clustering
> d <- dist(as.matrix(mtcars)) > hc <- hclust(d) > plot(hc)
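Going one step beyond the plot (a sketch; k = 3 is arbitrary here):
groups <- cutree(hc, k = 3)   # assign each car to one of 3 clusters
table(groups)
rect.hclust(hc, k = 3)        # outline those clusters on the dendrogram just plotted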
data(swiss) - pairs
pairs(~ Fertility + Education + Catholic, data = swiss, subset = Education < 20, main = "Swiss data, Education < 20")
ctree
require(party)
swiss_ctree <- ctree(Fertility ~ Agriculture + Education + Catholic, data = swiss)
plot(swiss_ctree)
Hierarchical clustering
> dswiss <- dist(as.matrix(swiss)) > hs <- hclust(dswiss) > plot(hs)
scatterplotMatrix(swiss)
require(lattice); splom(swiss)
Start collecting your favorite plotting routines.
Get familiar with annotating plots.
Assignment 3: Preliminary and Statistical Analysis. Due October 5. 15% (written).
Distribution analysis and comparison, visual ‘analysis’, statistical model fitting and testing of some of the nyt2…31 datasets.
See LMS … for the Assignment and details.
Assignments 4, 5 and 6 are available…