
1 DATA MINING Ronald Westra Dep. Mathematics Knowledge Engineering
from data to information Ronald Westra Dep. Mathematics Knowledge Engineering Maastricht University

2 PART 1 Introduction

3 All information on the math part of the course on:
All information on the math part of the course on:

4

5

6 Data mining - a definition
"Data mining is the process of exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns and results."   (Berry & Linoff, 1997, 2000)

7 DATA MINING Course Description:
Course Description: In this course the student will be made familiar with the main topics in Data Mining and its important role in current Computer Science. We will mainly focus on algorithms, methods, and techniques for the representation and analysis of data and information.

8 DATA MINING Course Objectives:
Course Objectives: To get a broad understanding of data mining and knowledge discovery in databases. To understand major research issues and techniques in this area and be able to conduct research. To be able to apply data mining tools to practical problems.

9 LECTURE 1: Introduction
Fayyad, U., Piatetsky-Shapiro, G., and Smyth, P. (1996), From Data Mining to Knowledge Discovery in Databases. Hand, D., Mannila, H., Smyth, P. (2001), Principles of Data Mining, MIT Press, Boston, USA. MORE INFORMATION ON: ELEUM and:

10 Hand, D., Mannila, H., Smyth, P. (2001),
Principles of Data Mining, MIT Press, Boston, USA + MORE INFORMATION ON: ELEUM or DAM-website

11 LECTURE 1: Introduction
What is Data Mining? • data → information → knowledge • patterns, structures, models. The use of Data Mining: • increasingly larger databases, up to TB (terabytes) • N data points with K components (fields) per data point • not accessible for fast inspection • incomplete, noisy, badly designed • different numerical formats, alphanumerical and semantic fields • necessity to automate the analysis

12 LECTURE 1: Introduction
Applications • astronomical databases • marketing/investment • telecommunication • industrial • biomedical/genetics

13 LECTURE 1: Introduction
Historical Context • in mathematical statistics 'data mining' had a negative connotation: • danger of overfitting and erroneous generalisation

14 LECTURE 1: Introduction
Data Mining Subdisciplines • Databases • Statistics • Knowledge Based Systems • High-performance computing • Data visualization • Pattern recognition • Machine learning

15 LECTURE 1: Introduction
Data Mining methods • Clustering • Classification (off- & on-line) • (Auto-)regression • Visualisation techniques: optimal projections and PCA (principal component analysis) • Discriminant analysis • Decomposition • Parametric modelling • Non-parametric modelling

16 LECTURE 1: Introduction
Data Mining essentials • model representation • model evaluation • search/optimisation. Data Mining algorithms • Decision trees/rules • Nonlinear regression and classification • Example-based methods • AI tools: NN, GA, ...

17 LECTURE 1: Introduction
Data Mining and Mathematical Statistics • when Statistics and when DM? • is DM a sort of Mathematical Statistics? Data Mining and AI • AI is instrumental in finding knowledge in large chunks of data

18 Mathematical Principles in Data Mining
Part I: Exploring Data Space * Understanding and Visualizing Data Space Provide tools to understand the basic structure in databases. This is done by probing and analysing the metric structure in data space, comprehensively visualizing data, and analysing global data structure by e.g. Principal Components Analysis and Multidimensional Scaling. * Data Analysis and Uncertainty Show the fundamental role of uncertainty in Data Mining. Understand the difference between uncertainty originating from statistical variation in the sensing process and from imprecision in the semantic modelling. Provide frameworks and tools for modelling uncertainty: especially the frequentist and subjective/conditional frameworks.

19 Mathematical Principles in Data Mining
PART II: Finding Structure in Data Space * Data Mining Algorithms & Scoring Functions Provide a measure for fitting models and patterns to data. This enables the selection between competing models. Data Mining Algorithms are discussed in the parallel course. * Searching for Models and Patterns in Data Space Describe the computational methods used for model and pattern fitting in data mining algorithms. Most emphasis is on search and optimisation methods. This is required to find the best fit between the model or pattern and the data. Special attention is devoted to parameter estimation under missing data using the maximum-likelihood EM algorithm.

20 Mathematical Principles in Data Mining
PART III: Mathematical Modelling of Data Space * Descriptive Models for Data Space Present descriptive models in the context of Data Mining. Describe specific techniques and algorithms for fitting descriptive models to data. Main emphasis here is on probabilistic models. * Clustering in Data Space Discuss the role of data clustering within Data Mining. Show how clustering relates to classification and search. Present a variety of paradigms for clustering data.

21 EXAMPLES * Astronomical Databases
* Phylogenetic trees from DNA-analysis

22 Example 1: Phylogenetic Trees
The last decade has witnessed a major, historical leap in biology and all related disciplines. The date of this event can be set almost exactly to November 1999, when the Human Genome Project (HGP) was declared completed. The HGP resulted in (almost) the entire human genome, consisting of about 3 billion base pairs (bp) of code, constituting all of the approximately 35K human genes. Since then the genomes of many more animal and plant species have become available. For our purposes, we can consider the human genome as a huge database, consisting of a single string with characters from the set {C,G,A,T}.

23 Example 1: Phylogenetic Trees
This data constitutes the human 'source code'. From this data – in principle – all 'hardware' characteristics, such as physiological and psychological features, can be deduced. In this block we will concentrate on another aspect hidden in this information: phylogenetic relations between species. The famous evolutionary biologist Dobzhansky once remarked: 'Everything makes sense in the light of evolution, nothing makes sense without the light of evolution.' This most certainly applies to the genome. Hidden in the data is the evolutionary history of the species. By systematically comparing several species with varying degrees of relatedness, we can reconstruct this evolutionary history. For instance, consider a species that lived at a certain time in Earth's history. It will be marked by a set of genes, each with a specific code (or rather, a statistical variation around the average).

24 Example 1: Phylogenetic Trees
If this species is for some reason distributed over a variety of unconnected areas (e.g. islands, oases, mountainous regions), animals of the species will not be able to mate at random. In the course of time, due to the accumulation of random mutations, the genomes of the separated groups will increasingly differ. This will result in the origin of sub-species, and eventually new species. Comparing the genomes of the new species will shed light on the evolutionary history, in that we can: draw a phylogenetic tree of the sub-species leading back to the 'founder' species; estimate, given the rate of mutation, how long ago the founder species lived; reconstruct the most probable genome of the founder species. A toy sketch of this idea follows below.
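As an illustrative aside (not part of the original slides): the reconstruction sketched above can be mimicked on toy data by computing pairwise distances between aligned sequences and clustering them hierarchically. The sequences and names below are invented for the example, and average-linkage clustering merely stands in for proper phylogenetic methods.

```python
# Toy sketch: recover a tree from pairwise Hamming distances between aligned sequences.
# Real analyses use full genomes, alignments and substitution models instead.
from itertools import combinations
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

seqs = {
    "species_A": "ACGTACGTAC",
    "species_B": "ACGTACGTTC",
    "species_C": "ACGAACGTTC",
    "species_D": "TCGAACCTTC",
}
names = list(seqs)

def hamming(a, b):
    """Number of positions at which two equally long sequences differ."""
    return sum(x != y for x, y in zip(a, b))

# Condensed pairwise distance list, in the order scipy expects
dists = [hamming(seqs[a], seqs[b]) for a, b in combinations(names, 2)]

# Average-linkage (UPGMA-like) clustering approximates the phylogenetic tree
tree = linkage(dists, method="average")
dendrogram(tree, labels=names)
plt.show()
```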

25

26

27

28

29

30

31 Example 2: data mining in astronomy

32 Example 2: data mining in astronomy

33 Example 2: data mining in astronomy

34

35

36 DATA AS SETS OF MEASUREMENTS AND OBSERVATIONS
Data Mining Lecture II [Chapter 2 from Principles of Data Mining by Hand, Mannila, Smyth]

37 LECTURE 2: DATA AS SETS OF MEASUREMENTS AND OBSERVATIONS
Readings: • Chapter 2 from Principles of Data Mining by Hand, Mannila, Smyth.

38 2.1 Types of Data 2.2 Sampling 1. (Re)sampling 2. Oversampling/undersampling, sampling artefacts 3. Bootstrap and jackknife methods 2.3 Measures for Similarity and Difference 1. Phenomenological 2. Dissimilarity coefficient 3. Metric in data space based on a distance measure

39 Types of data: Sampling and Resampling
Sampling – the process of collecting new (empirical) data. Resampling – selecting data from a larger, already existing collection.

40 Sampling Oversampling Undersampling
Sampling artefacts (aliasing, Nyquist frequency)

41 Sampling artefacts (aliasing, Nyquist frequency)
Moiré fringes

42 Resampling Resampling is any of a variety of methods for doing one of the following: Estimating the precision of sample statistics (medians, variances, percentiles) by using subsets of available data (= jackknife) or drawing randomly with replacement from a set of data points (= bootstrapping) Exchanging labels on data points when performing significance tests (permutation test, also called exact test, randomization test, or re-randomization test) Validating models by using random subsets (bootstrap, cross validation)

43 Bootstrap & Jackknife methods
Using inferential statistics to account for randomness and uncertainty in the observations. These inferences may take the form of answers to essentially yes/no questions (hypothesis testing), estimates of numerical characteristics (estimation), prediction of future observations, descriptions of association (correlation), or modeling of relationships (regression).

44 Bootstrap method bootstrapping is a method for estimating the sampling distribution of an estimator by resampling with replacement from the original sample. "Bootstrap" means that resampling one available sample gives rise to many others, reminiscent of pulling yourself up by your bootstraps. cross-validation: verify replicability of results Jackknife: detect outliers Bootstrap: inferential statistics
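A minimal numpy sketch of the bootstrap (and, for comparison, the jackknife) as described above; the sample, the statistic, and the number of replicates are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=5.0, scale=2.0, size=100)        # the one available sample

def bootstrap_se(sample, statistic, n_boot=2000):
    """Standard error of `statistic`, estimated by resampling with replacement."""
    n = len(sample)
    replicates = np.array([statistic(rng.choice(sample, size=n, replace=True))
                           for _ in range(n_boot)])
    return replicates.std(ddof=1)

print("bootstrap SE of the median:", bootstrap_se(data, np.median))

# Jackknife: recompute the statistic leaving out one observation at a time
n = len(data)
jack = np.array([np.median(np.delete(data, i)) for i in range(n)])
jack_se = np.sqrt((n - 1) / n * np.sum((jack - jack.mean()) ** 2))
print("jackknife SE of the median:", jack_se)
```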

45 2.3 Measures for Similarity and Dissimilarity
1. Phenomenological 2. Dissimilarity coefficient 3. Metric in Data Space based on distance measure

46 2.4 Distance Measure and Metric
1. Euclidean distance 2. Metric 3. Commensurability 4. Normalisation 5. Weighted distances 6. Sample covariance 7. Sample covariance correlation coefficient 8. Mahalanobis distance 9. Normalised distance and cluster separation (see supplementary text) 10. Generalised Minkowski

47 2.4 Distance Measure and Metric
1. Euclidean distance

48 2.4 Distance Measure and Metric
2. Generalized p-norm
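The formula on this slide was lost in the transcript; the generalized (Minkowski) p-norm distance presumably intended here is, in standard notation:

```latex
d_p(\mathbf{x}, \mathbf{y}) = \Bigl( \sum_{i=1}^{K} \lvert x_i - y_i \rvert^{p} \Bigr)^{1/p}, \qquad p \ge 1,
```

with p = 2 giving the Euclidean distance and p = 1 the Manhattan (city-block) distance.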

49 Generalized Norm / Metric

50 Minkowski Metric

51 Minkowski Metric

52 Generalized Minkowski Metric
A structure is already present in the data space. This structure is represented by the correlations and given by the covariance matrix G. The Minkowski norm of a vector x is then given by the expression below.
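The norm itself was lost with the slide image; given that the structure is encoded by the covariance matrix G, the covariance-weighted (Mahalanobis-type) form presumably intended is:

```latex
\lVert \mathbf{x} \rVert_G = \sqrt{\mathbf{x}^{\mathsf T} G^{-1} \mathbf{x}},
\qquad
d_G(\mathbf{x}, \mathbf{y}) = \sqrt{(\mathbf{x} - \mathbf{y})^{\mathsf T} G^{-1} (\mathbf{x} - \mathbf{y})}.
```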

53 2.4 Distance Measure and Metric
1. Euclidean distance 2. Metric 3. Commensurability 4. Normalisation 5. Weighted distances 6. Sample covariance 7. Sample covariance correlation coefficient 8. Mahalanobis distance 9. Normalised distance and cluster separation (see supplementary text) 10. Generalised Minkowski

54 2.4 Distance Measure and Metric
Mahalanobis distance

55 2.4 Distance Measure and Metric Mahalanobis distance
The Mahalanobis distance is a distance measure introduced by P. C. Mahalanobis in 1936. It is based on correlations between variables by which different patterns can be identified and analysed. It is a useful way of determining similarity of an unknown sample set to a known one. It differs from Euclidean distance in that it takes into account the correlations of the data set.

56 2.4 Distance Measure and Metric
Mahalanobis distance. The Mahalanobis distance of a multivariate vector x from a group of values with mean μ and covariance matrix Σ is defined as: D_M(x) = sqrt( (x − μ)^T Σ^(−1) (x − μ) ).

57 Mahalanobis distance 2.4 Distance Measure and Metric
Mahalanobis distance can also be defined as a dissimilarity measure between two random vectors x and y of the same distribution with covariance matrix Σ: d(x, y) = sqrt( (x − y)^T Σ^(−1) (x − y) ).

58 2.4 Distance Measure and Metric
Mahalanobis distance. If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Euclidean distance. If the covariance matrix is diagonal, it becomes the normalized Euclidean distance: d(x, y) = sqrt( Σ_i (x_i − y_i)^2 / σ_i^2 ), where σ_i is the standard deviation of x_i over the sample set.
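A small numpy sketch of the three cases just listed (Euclidean, normalized Euclidean with a diagonal covariance, full Mahalanobis); the random data serve only to illustrate the formulas.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))          # sample set: 200 points in 3-D
x, y = X[0], X[1]

# Euclidean distance: identity covariance
d_euclid = np.sqrt(np.sum((x - y) ** 2))

# Normalized Euclidean: diagonal covariance, divide each component by its std dev
sigma = X.std(axis=0, ddof=1)
d_norm = np.sqrt(np.sum(((x - y) / sigma) ** 2))

# Mahalanobis: full covariance matrix of the sample set
S = np.cov(X, rowvar=False)
diff = x - y
d_mahal = np.sqrt(diff @ np.linalg.inv(S) @ diff)

print(d_euclid, d_norm, d_mahal)
```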

59 2.4 Distance measures and Metric
8. Mahalanobis distance

60 2.4 Distance measures and Metric
8. Mahalanobis distance

61 2.4 Distance measures and Metric
8. Mahalanobis distance

62 2.5 Distortions in Data Sets
1. Outliers 2. Variance 3. Sampling effects 2.6 Pre-processing data with mathematical transformations 2.7 Data Quality • Data quality of individual measurements [GIGO] • Data quality of data collections

63

64 Part II. Exploratory Data Analysis

65 VISUALISING AND EXPLORING DATA-SPACE
Data Mining Lecture III [Chapter 3 from Principles of Data Mining by Hand, Mannila, Smyth]

66 LECTURE 3: Visualising and Exploring Data-Space
Readings: • Chapter 3 from Principles of Data Mining by Hand, Mannila, Smyth. 3.1 Obtain insight into the structure of data space 1. distribution over the space 2. Are there separate and disconnected parts? 3. Is there a model? 4. data-driven hypothesis testing 5. Starting point: use the strong perceptual powers of humans

67 LECTURE 3: Visualising and Exploring Data-Space
3.2 Tools to represent a variable 1. mean, variance, standard deviation, skewness 2. plot 3. moving-average plot 4. histogram, kernel
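A short matplotlib/scipy sketch of these single-variable tools (summary statistics, histogram, kernel density estimate) on synthetic data:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde, skew

rng = np.random.default_rng(2)
x = rng.gamma(shape=2.0, scale=1.5, size=500)     # synthetic, skewed variable

print("mean", x.mean(), "variance", x.var(ddof=1),
      "std", x.std(ddof=1), "skewness", skew(x))

# Histogram plus a Gaussian kernel density estimate
grid = np.linspace(x.min(), x.max(), 200)
plt.hist(x, bins=30, density=True, alpha=0.5, label="histogram")
plt.plot(grid, gaussian_kde(x)(grid), label="kernel estimate")
plt.legend()
plt.show()
```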

68 histogram

69 LECTURE 3: Visualising and Exploring Data-Space
3.3 Tools for representing two variables 1. scatter plot 2. moving-average plots

70 scatter plot

71 scatter plots

72 LECTURE 3: Visualising and Exploring Data-Space
3.4 Tools for representing multiple variables 1. all or a selection of scatter plots 2. idem moving-average plots 3. 'trellis' or other parameterised plots 4. icons: star icons, Chernoff's faces

73 Chernoff’s faces

74 Chernoff’s faces

75 Chernoff’s faces

76 DIMENSION REDUCTION
3.5 PCA: Principal Component Analysis 3.6 MDS: Multidimensional Scaling

77 3.5 PCA: Principal Component Analysis
1. With sub-scatter plots we already noticed that the best projections were those that resulted in the optimal spreading of the set of points. This is obtained by projecting onto the directions of largest variance (equivalently, projecting away the directions of smallest variance). This idea is now worked out.

78 3.5 PCA: Principal Component Analysis
2. Underlying idea: suppose you have a high-dimensional, normally distributed data set. This will take the shape of a high-dimensional ellipsoid. An ellipsoid is structured from its centre by orthogonal vectors with different radii. The largest radii have the strongest influence on the shape of the ellipsoid. The ellipsoid is described by the covariance matrix of the set of data points. The axes are defined by the orthogonal eigenvectors (from the centre – the centroid – of the set); the radii are defined by the associated eigenvalues. So determine the eigenvalues and order them in decreasing size; the first n ordered eigenvectors then 'explain' the fraction of the data variance given in the formula below.
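The two expressions dropped out of the transcript; in standard notation they are presumably the eigenvalue ordering and the explained-variance fraction:

```latex
\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p \ge 0,
\qquad
\text{fraction explained by the first } n \text{ components} =
\frac{\sum_{i=1}^{n} \lambda_i}{\sum_{i=1}^{p} \lambda_i}.
```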

79 3.5 PCA: Principal Component Analysis

80 3.5 PCA: Principal Component Analysis

81 3.5 PCA: Principal Component Analysis
MEAN

82 3.5 PCA: Principal Component Analysis
Principal axis 2 Principal axis 1 MEAN

83 3.5 PCA: Principal Component Analysis
3. Plot the ordered eigenvalues versus the index number and inspect where a 'shoulder' occurs: this determines the number of eigenvalues you take into account. This is a so-called 'scree plot'.
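A minimal sketch of such a scree plot, assuming the eigenvalues have already been computed and sorted in decreasing order; the values below are made up:

```python
import numpy as np
import matplotlib.pyplot as plt

eigvals = np.array([4.2, 2.1, 0.6, 0.3, 0.2, 0.1])   # example values, sorted descending

plt.plot(np.arange(1, len(eigvals) + 1), eigvals, "o-")
plt.xlabel("component index")
plt.ylabel("eigenvalue")
plt.title("scree plot: look for the 'shoulder'")
plt.show()
```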

84 3.5 PCA: Principal Component Analysis
4. Another derivation is by maximisation of the variance of the projected data (see book). This leads to an eigenvalue problem for the covariance matrix, hence the same solution as described above.

85 3.5 PCA: Principal Component Analysis
5. For n points of p components, O(np^2 + p^3) operations are required. Use LU decomposition, etc.

86 3.5 PCA: Principal Component Analysis
6. Many benefits: considerable data reduction, necessary for computational techniques like Fisher discriminant analysis and clustering. This works very well in practice.

87 3.5 PCA: Principal Component Analysis
7. NB: Factor Analysis is often confused with PCA but is different: it explains p-dimensional data by a smaller number of m < p factors.

88 Principal component analysis (PCA) is a technique that is useful for the
compression and classification of data. The purpose is to reduce the dimensionality of a data set (sample) by finding a new set of variables, smaller than the original set of variables, that nonetheless retains most of the sample's information. By information we mean the variation present in the sample, given by the correlations between the original variables. The new variables, called principal components (PCs), are uncorrelated, and are ordered by the fraction of the total information each retains.

89 Overview: geometric picture of PCs; algebraic definition and derivation of PCs; usage of PCA; astronomical application

90 Geometric picture of principal components (PCs)
A sample of n observations in 2-D space. Goal: to account for the variation in the sample in as few variables as possible, to some accuracy.

91 Geometric picture of principal components (PCs)
The 1st PC is a minimum-distance fit to a line in the data space; the 2nd PC is a minimum-distance fit to a line in the plane perpendicular to the 1st PC. PCs are a series of linear least-squares fits to a sample, each orthogonal to all the previous ones.

92 Algebraic definition of PCs
Given a sample of n observations on a vector of p variables x = (x_1, ..., x_p), define the first principal component of the sample by the linear transformation z_1 = a_1^T x = Σ_j a_1j x_j, where the coefficient vector a_1 is chosen such that var[z_1] is maximal.

93 Algebraic definition of PCs
Likewise, define the kth PC of the sample by the linear transformation z_k = a_k^T x, where the vector a_k is chosen such that var[z_k] is maximal, subject to cov[z_k, z_l] = 0 for l < k and to a_k^T a_k = 1.

94 Algebraic derivation of coefficient vectors
To find a_1, first note that var[z_1] = a_1^T S a_1, where S is the covariance matrix of the p variables.

95 Algebraic derivation of coefficient vectors
To find a_1, maximize a_1^T S a_1 subject to a_1^T a_1 = 1. Let λ be a Lagrange multiplier; then maximize a_1^T S a_1 − λ(a_1^T a_1 − 1) by differentiating with respect to a_1: it follows that a_1 is an eigenvector of S corresponding to eigenvalue λ, as worked out below.
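The symbols on this slide were lost with the images; the standard Lagrange-multiplier derivation presumably intended here (cf. Jolliffe) reads:

```latex
\max_{\mathbf{a}_1} \; \mathbf{a}_1^{\mathsf T} S \mathbf{a}_1
\quad \text{subject to} \quad \mathbf{a}_1^{\mathsf T} \mathbf{a}_1 = 1
\;\Longrightarrow\;
\frac{\partial}{\partial \mathbf{a}_1}
\bigl( \mathbf{a}_1^{\mathsf T} S \mathbf{a}_1 - \lambda (\mathbf{a}_1^{\mathsf T} \mathbf{a}_1 - 1) \bigr) = 0
\;\Longrightarrow\;
S \mathbf{a}_1 = \lambda \mathbf{a}_1 .
```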

96 Algebraic derivation of
We have maximized var[z_1] = a_1^T S a_1 = λ. So λ is the largest eigenvalue of S. The first PC retains the greatest amount of variation in the sample.

97 Algebraic derivation of coefficient vectors
To find the next coefficient vector a_2, maximize a_2^T S a_2 subject to cov[z_2, z_1] = 0 and to a_2^T a_2 = 1. First note that cov[z_2, z_1] = a_1^T S a_2 = λ_1 a_1^T a_2; then let λ and φ be Lagrange multipliers, and maximize a_2^T S a_2 − λ(a_2^T a_2 − 1) − φ a_1^T a_2.

98 Algebraic derivation of coefficient vectors
We find that a_2 is also an eigenvector of S, whose eigenvalue λ_2 is the second largest. In general: the kth largest eigenvalue λ_k of S is the variance of the kth PC, and the kth PC retains the kth greatest fraction of the variation in the sample.

99 Algebraic formulation of PCA
Given a sample of n observations on a vector of p variables x, define a vector of p PCs z = A^T x, where A is an orthogonal p x p matrix whose kth column a_k is the kth eigenvector of S. Then Λ = A^T S A is the covariance matrix of the PCs, being diagonal with elements λ_1, ..., λ_p.

100 usage of PCA: Probability distribution for sample PCs
If (i) the n observations of x in the sample are independent and (ii) x is drawn from an underlying population that follows a p-variate normal (Gaussian) distribution with known covariance matrix Σ, then the sample covariance matrix follows a Wishart distribution; else utilize a bootstrap approximation.

101 usage of PCA: Probability distribution for sample PCs
If (i) the sample covariance matrix follows a Wishart distribution and (ii) the population eigenvalues are all distinct, then the following results hold as n becomes large: all the sample eigenvalues are independent of all the sample eigenvectors, and both are jointly normally distributed (a tilde denotes a population quantity).

102 usage of PCA: Probability distribution for sample PCs
and (a tilde denotes a population quantity)

103 usage of PCA: Inference about population PCs
If x follows a p-variate normal distribution, then analytic expressions exist* for MLEs of the population eigenvalues and eigenvectors, for confidence intervals on them, and for hypothesis tests about them; else bootstrap and jackknife approximations exist. *see references, esp. Jolliffe

104 usage of PCA: Practical computation of PCs
In general it is useful to define standardized variables by dividing each variable by its standard deviation, x_j* = x_j / sqrt(s_jj). If the variables are each measured about their sample mean, then the covariance matrix of the standardized variables will be equal to the correlation matrix of the original variables, and the PCs will be dimensionless.

105 usage of PCA: Practical computation of PCs
Given a sample of n observations on a vector of p variables (each measured about its sample mean), compute the covariance matrix S from the n x p matrix X whose ith row is the ith observation. Then compute the n x p matrix Z = X A of PC scores (with A the matrix of eigenvectors defined earlier), whose ith row is the PC score for the ith observation.
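A compact numpy sketch of this computation, including the truncation to m < p components used for data compression in the following slides; the data and variable names are chosen here for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 5))             # n = 100 observations of p = 5 variables

X = X - X.mean(axis=0)                    # measure each variable about its sample mean
S = (X.T @ X) / len(X)                    # covariance matrix S = (1/n) X^T X

# Eigendecomposition of the symmetric matrix S; sort eigenvalues in decreasing order
eigvals, A = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
eigvals, A = eigvals[order], A[:, order]

Z = X @ A                                 # n x p matrix of PC scores

# Dimension reduction: keep the first m components and reconstruct X approximately
m = 2
X_approx = Z[:, :m] @ A[:, :m].T
print("explained fraction:", eigvals[:m].sum() / eigvals.sum())
```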

106 usage of PCA: Practical computation of PCs
Write x_i = Σ_{k=1..p} z_ik a_k to decompose each observation into its PCs.

107 usage of PCA: Data compression
Because the kth PC retains the kth greatest fraction of the variation, we can approximate each observation by truncating the sum at the first m < p PCs: x_i ≈ Σ_{k=1..m} z_ik a_k.

108 usage of PCA: Data compression
Reduce the dimensionality of the data from p to m < p by approximating X ≈ Z_m A_m^T, where Z_m is the n x m portion of Z and A_m is the p x m portion of A.

109 astronomical application: PCs for elliptical galaxies
Rotating to PC in BT – Σ space improves Faber-Jackson relation as a distance indicator Dressler, et al. 1987

110 astronomical application: Eigenspectra (KL transform)
Connolly, et al. 1995

111 references
Connolly, A.J., Szalay, A.S., et al., "Spectral Classification of Galaxies: An Orthogonal Approach", AJ, 110, 1995.
Dressler, A., et al., "Spectroscopy and Photometry of Elliptical Galaxies. I. A New Distance Estimator", ApJ, 313, 42-58, 1987.
Efstathiou, G., and Fall, S.M., "Multivariate analysis of elliptical galaxies", MNRAS, 206, 1984.
Johnston, D.E., et al., "SDSS J : A New Gravitational Lens", AJ, 126, 2003.
Jolliffe, I.T., 2002, Principal Component Analysis (Springer-Verlag New York, Secaucus, NJ).
Lupton, R., 1993, Statistics in Theory and Practice (Princeton University Press, Princeton, NJ).
Murtagh, F., and Heck, A., Multivariate Data Analysis (D. Reidel Publishing Company, Dordrecht, Holland).
Yip, C.W., Szalay, A.S., et al., "Distributions of Galaxy Spectral Types in the SDSS", AJ, 128, 2004.

112 3.5 PCA: Principal Component Analysis

113

114

115

116

117 1 pc, 2 pc, 3 pc, 4 pc

118 3.6 Multidimensional Scaling [MDS]
1. Same purpose: represent a high-dimensional data set. 2. In the case of MDS not by projections, but by reconstruction from the distance table; the computed points are represented in a Euclidean sub-space – preferably a 2-D plane. 3. MDS performs better than PCA in the case of strongly curved sets.

119 3.6 Multidimensional Scaling
The purpose of multidimensional scaling (MDS) is to provide a visual representation of the pattern of proximities (i.e., similarities or distances) among a set of objects. INPUT: distances dist[Ai,Aj], where A is some class of objects. OUTPUT: positions X[Ai], where X is a D-dimensional vector.
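A minimal sketch of this INPUT/OUTPUT pattern using scikit-learn's MDS on a precomputed distance table, assuming scikit-learn is available; the distances come from random points, so the embedding itself is not interesting.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import squareform, pdist

rng = np.random.default_rng(4)
objects = rng.normal(size=(30, 10))              # 30 objects described in 10-D

D = squareform(pdist(objects))                   # INPUT: full distance table dist[Ai, Aj]

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
positions = mds.fit_transform(D)                 # OUTPUT: 2-D positions X[Ai]
print(positions.shape)                           # (30, 2)
```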

120 3.6 Multidimensional Scaling

121 3.6 Multidimensional Scaling

122 3.6 Multidimensional Scaling
INPUT: distances dist[Ai,Aj] where A is some class of objects

123 3.6 Multidimensional Scaling
OUTPUT: positions X[Ai] where X is a D-dimensional vector

124 3.6 Multidimensional Scaling
How many dimensions ??? SCREE PLOT

125 Multidimensional Scaling: Dutch dialects

126

127

128 3.6 Kohonen’s Self Organizing Map (SOM) and Sammon mapping
1. Same purpose, DIMENSION REDUCTION: represent a high-dimensional set in a smaller sub-space, e.g. a 2-D plane. 2. SOM gives better results than Sammon mapping, but is strongly sensitive to initial values. 3. This is close to clustering!

129 3.6 Kohonen’s Self Organizing Map (SOM)

130 3.6 Kohonen’s Self Organizing Map (SOM)

131 Sammon mapping

132

133 1. Clustering versus Classification
• classification: give a pre-determined label to a sample • clustering: derive the relevant labels for classification from the structure of a given dataset • clustering: maximal intra-cluster similarity and maximal inter-cluster dissimilarity • Objectives: 1. segmentation of space 2. find natural subclasses (see the sketch below)
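A small clustering sketch in the spirit of this slide (labels derived from the structure of the data rather than given in advance), assuming scikit-learn; the two Gaussian blobs are synthetic and k-means stands in for the clustering paradigms treated later.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
# Two synthetic "natural subclasses" in 2-D
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3, 3], scale=0.5, size=(100, 2)),
])

# Clustering: the labels come from the structure of the data itself
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_                   # cluster assignment per sample
centres = kmeans.cluster_centers_         # segmentation of the data space
print(centres)
```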

134 The End

