
CIS4930 Introduction to Data Mining Data Preprocessing Peixiang Zhao

Motivation
Data in the real world is dirty:
- Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data
- Noisy: containing errors or outliers
- Inconsistent: containing discrepancies in codes or names
No quality data, no quality mining results! Quality decisions must be based on quality data.

Data Quality
Measures for data quality (a multidimensional view):
- Accuracy: correct or wrong, accurate or not
- Completeness: not recorded, unavailable, ...
- Consistency: some values modified but some not, dangling references, ...
- Timeliness: updated in a timely manner?
- Believability: how trustworthy the data are
- Interpretability: how easily the data can be understood

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integration of multiple databases, data cubes, or files
- Data reduction: dimensionality reduction, numerosity reduction, data compression
- Data transformation and data discretization: normalization, concept hierarchy generation

Data Cleaning
Data in the real world is dirty: there is lots of potentially incorrect data due to faulty instruments, human or computer error, and transmission errors.
- Incomplete: lacking attribute values, lacking certain attributes of interest, or containing only aggregate data, e.g., Occupation = “ ” (missing data)
- Noisy: containing noise, errors, or outliers, e.g., Salary = “−10” (an error)
- Inconsistent: containing discrepancies in codes or names, e.g., Age = “42” but Birthday = “03/07/2010”; a rating that was “1, 2, 3” is now “A, B, C”; discrepancies between duplicate records
- Intentional (e.g., disguised missing data): Jan. 1 recorded as everyone’s birthday?

Incomplete (Missing) Data
Data is not always available; e.g., many tuples have no recorded value for several attributes, such as customer income in sales data.
Missing data may be due to:
- Equipment malfunction
- Values inconsistent with other recorded data and thus deleted
- Data not entered due to misunderstanding
- Certain data not considered important at the time of entry
- Changes to the data not being registered (no history kept)
Missing data may need to be inferred.

How to Handle Missing Data?
- Ignore the tuple: not effective when the percentage of missing values per attribute varies considerably
- Fill in the missing value manually: tedious and often infeasible
- Fill it in automatically with: a global constant (e.g., “unknown”), the attribute mean, the attribute mean for all samples belonging to the same class, or the most probable value (inference-based, e.g., a Bayesian formula or decision tree); see the sketch below
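
A minimal pandas sketch of these automatic fill-in strategies; the DataFrame and its column names ("class", "income") are made up for illustration.

```python
# Sketch: automatic fill-in strategies for missing values (hypothetical data).
import pandas as pd

df = pd.DataFrame({
    "class":  ["A", "A", "B", "B", "B"],
    "income": [30000, None, 52000, None, 48000],
})

# Global constant (for categorical attributes a marker like "unknown" would be used)
df["income_const"] = df["income"].fillna(-1)

# Attribute mean
df["income_mean"] = df["income"].fillna(df["income"].mean())

# Attribute mean for all samples belonging to the same class
df["income_class_mean"] = df["income"].fillna(
    df.groupby("class")["income"].transform("mean"))
print(df)
```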

Noisy Data
Noise: random error or variance in a measured variable.
Noise may be due to:
- Faulty data collection instruments
- Data entry problems
- Data transmission problems
- Technology limitation
- Inconsistency in naming convention
Other data problems which require data cleaning: duplicate records, incomplete data, inconsistent data.

How to Handle Noisy Data?
- Binning: first sort the data and partition it into (equal-frequency) bins; then smooth by bin means, bin medians, bin boundaries, etc.
- Regression: smooth by fitting the data to regression functions
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and have a human check them (e.g., to deal with possible outliers)

Data Integration
Definition: combines data from multiple sources into a coherent store.
Key issues:
- Schema integration: e.g., A.cust-id ≡ B.cust-#; integrate metadata from different sources
- Entity identification problem: identify real-world entities across multiple data sources, e.g., Bill Clinton = William Clinton
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources may differ; possible reasons include different representations or different scales, e.g., metric vs. British units

Example

Handling Redundancy in Data Integration
Careful integration of data from multiple sources can help reduce or avoid redundancies and inconsistencies and improve mining speed and quality.
Redundant data occur often when integrating multiple data sources:
- Object identification: the same attribute or object may have different names in different datasets
- Derivable data: one attribute may be a “derived” attribute in another table, e.g., annual revenue vs. monthly revenues
Redundant attributes may be detected by correlation analysis and covariance analysis.

Correlation Analysis: the Χ2 (chi-square) test
- The larger the Χ2 value, the more likely the variables are related
- The cells that contribute the most to the Χ2 value are those whose actual count is very different from the expected count
- Correlation does not imply causality: the number of hospitals and the number of car thefts in a city are correlated, but both are causally linked to a third variable, the population

Chi-Square Calculation: An Example
Χ2 = Σ (observed − expected)² / expected, where the expected counts (shown in parentheses) are calculated from the data distribution in the two categories.

                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)        450
Not like science fiction  50 (210)     1000 (840)       1050
Sum (col.)                300          1200             1500

For the (“like science fiction”, “play chess”) cell, the expected count is 300 × 450 / 1500 = 90. The resulting Χ2 value (507.93) shows that like_science_fiction and play_chess are correlated in this group.
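
This calculation can be reproduced with a short NumPy sketch (not part of the original slides); the observed counts are taken from the table above.

```python
# Sketch: chi-square statistic for the 2x2 contingency table above.
import numpy as np

observed = np.array([[250, 200],    # like science fiction: play chess / not
                     [ 50, 1000]])  # not like science fiction

row_totals = observed.sum(axis=1, keepdims=True)   # 450, 1050
col_totals = observed.sum(axis=0, keepdims=True)   # 300, 1200
total = observed.sum()                             # 1500

expected = row_totals * col_totals / total         # [[90, 360], [210, 840]]
chi2 = ((observed - expected) ** 2 / expected).sum()
print(chi2)   # ~507.93, far above the critical value for 1 degree of freedom
```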

Covariance
The covariance of (X, Y) is cov(X, Y) = E[(X − E(X))(Y − E(Y))] = E(XY) − E(X)E(Y).
The correlation of (X, Y) is cor(X, Y) = cov(X, Y) / (sd(X) sd(Y)), a.k.a. Pearson’s product-moment coefficient.
Correlation is a scaled version of covariance, and the two always have the same sign:
- When the sign is positive, the variables are said to be positively correlated
- When the sign is negative, the variables are said to be negatively correlated
- When the covariance is 0, the variables are said to be uncorrelated

Visually Evaluating Correlation
Scatter plots showing correlations ranging from −1 to 1.

Covariance: An Example
Suppose two stocks A and B have the following prices over one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
Question: if the stocks are affected by the same industry trends, will their prices rise or fall together?
E(A) = (2 + 3 + 5 + 4 + 6) / 5 = 20 / 5 = 4
E(B) = (5 + 8 + 10 + 11 + 14) / 5 = 48 / 5 = 9.6
Cov(A, B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14) / 5 − 4 × 9.6 = 4
Thus, A and B rise together, since Cov(A, B) > 0. We can stop at the covariance here because the standard deviations σ(A) and σ(B) are always positive, so the correlation has the same sign.
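
A NumPy check of the example, using the same population (divide-by-n) convention as the slide.

```python
# Sketch: covariance and correlation for the stock example above.
import numpy as np

A = np.array([2, 3, 5, 4, 6])
B = np.array([5, 8, 10, 11, 14])

cov = (A * B).mean() - A.mean() * B.mean()   # E(AB) - E(A)E(B) = 4.0
cor = cov / (A.std() * B.std())              # np.std defaults to the population (1/n) std
print(cov, cor)                              # positive, so A and B rise together
```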

Data Reduction
Motivation: obtain a reduced representation of the data set that is much smaller in volume but produces the same (or almost the same) analytical results.
Strategies:
- Dimensionality reduction (e.g., remove unimportant attributes): wavelet transforms, Principal Components Analysis (PCA), feature subset selection, feature creation
- Numerosity reduction: regression and log-linear models; histograms, clustering, sampling; data cube aggregation
- Data compression

Dimensionality Reduction
The curse of dimensionality:
- When dimensionality increases, data becomes increasingly sparse
- Density and the distances between points, which are critical to clustering and outlier analysis, become less meaningful
- The number of possible subspace combinations grows exponentially
Benefits of dimensionality reduction:
- Avoid the curse of dimensionality
- Help eliminate irrelevant features and reduce noise
- Reduce the time and space required for data mining
- Allow easier visualization
Dimensionality reduction techniques: wavelet transforms, Principal Component Analysis, and supervised and nonlinear techniques (e.g., feature selection).

Principal Component Analysis (PCA)
- Find a projection that captures the largest amount of variation in the data
- The original data are projected onto a much smaller space, resulting in dimensionality reduction
- The direction with the largest projected variance is called the first principal component; the orthogonal direction that captures the second largest projected variance is the second principal component, and so on
- The direction that maximizes the variance is also the one that minimizes the mean squared error

A Motivating Example
A dataset resulting from a survey of pilots of radio-controlled helicopters:
- x1 is a measure of a pilot’s piloting skill
- x2 captures how much he/she enjoys flying
The two attributes x1 and x2 are strongly correlated; the data actually lie along some diagonal axis (u1).
Question: how can we automatically compute u1?

PCA: Problem Formulation
Reduce from d dimensions to k dimensions: find k vectors u1, u2, ..., uk onto which to project the data so as to
- maximize the variance of the projected data, or equivalently
- minimize the projection error (squared projection distance)

PCA: General Procedure
Given n data vectors in d dimensions, find k ≤ d orthogonal vectors (principal components) that can best be used to represent the data:
- Normalize the input data so that each attribute falls within the same range
- Compute k orthonormal (unit) vectors, i.e., the principal components
- Each input data vector is a linear combination of the k principal component vectors
- The principal components are sorted in order of decreasing “significance” or strength
- Since the components are sorted, the size of the data can be reduced by eliminating the weak components, i.e., those with low variance (using the strongest principal components, it is possible to reconstruct a good approximation of the original data)
PCA works for numeric data only.

PCA: Input
Input: an n×d data matrix D
- Rows: also called instances, examples, records, transactions, objects, points, feature vectors, etc.; each is given as a d-tuple xi = (xi1, xi2, ..., xid)
- Columns: also called attributes, properties, features, dimensions, variables, fields, etc.; each is given as an n-tuple xj = (x1j, x2j, ..., xnj)

An Example

PCA: Algorithm
Preprocess the data to normalize its mean:
1. The mean μ of the data matrix D is the average of all the points
2. Replace each xi with xi − μ to get the centered data matrix Z
Steps 1-2 zero out the mean of the data.

PCA: Algorithm
Preprocess the data to normalize its variance (optional):
3. Let σj² = (1/n) Σi (xij)², for 1 ≤ j ≤ d
4. Replace each xij with xij / σj
Steps 3-4 rescale each coordinate to have unit variance, which ensures that different attributes are all treated on the same “scale”. They may be omitted if we know a priori that the different attributes are all on the same scale.

PCA: Algorithm
Reduce the data from d dimensions to k dimensions:
5. Compute the d×d covariance matrix Σ (the average of xi xi^T over the n centered points)
6. Compute the eigenvectors of Σ: [U, S, V] = svd(Σ), where U is a d×d matrix whose columns are the eigenvectors and S is a d×d diagonal matrix holding the eigenvalues λ1, λ2, ..., λd
7. Select the first k columns of U as the k principal components: Ureduce (d×k)
8. For x ∈ R^d, the new representation in the k-dimensional space is (Ureduce)^T x (a k×1 vector)
A minimal numeric sketch follows below.
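
A minimal NumPy sketch of these steps on a made-up 5×2 data matrix. Because the covariance matrix is symmetric, `np.linalg.svd` returns its eigenvectors in U and its eigenvalues in S.

```python
# Sketch: PCA via the covariance matrix and SVD (toy data, made up for illustration).
import numpy as np

D = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])            # n*d data matrix (n=5, d=2)
n, d = D.shape

Z = D - D.mean(axis=0)                # steps 1-2: replace each x_i with x_i - mu
Sigma = (Z.T @ Z) / n                 # step 5: d*d covariance matrix
U, S, Vt = np.linalg.svd(Sigma)       # step 6: columns of U are eigenvectors, S holds eigenvalues

k = 1
U_reduce = U[:, :k]                   # step 7: d*k, the first k principal components
projected = Z @ U_reduce              # step 8: n*k new representation of all points
print(S, projected)
```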

Choosing the Dimensionality
One criterion for choosing k is the fraction of the total variance captured by the first k principal components, f(k) = (λ1 + ... + λk) / (λ1 + ... + λd).
Given a desired variance threshold α, starting from the first principal component we keep adding components and stop at the smallest value of k for which f(k) ≥ α; that is, we select the fewest dimensions such that the subspace spanned by those k dimensions captures at least a fraction α of the total variance. A small sketch follows below.
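
A small sketch of this rule; the eigenvalues below are made up and assumed to be sorted in decreasing order.

```python
# Sketch: pick the smallest k with f(k) >= alpha (hypothetical eigenvalues).
import numpy as np

eigenvalues = np.array([2.5, 1.1, 0.4, 0.1])     # sorted in decreasing order
alpha = 0.9

f = np.cumsum(eigenvalues) / eigenvalues.sum()   # f(k) = (lam_1+...+lam_k) / (lam_1+...+lam_d)
k = int(np.argmax(f >= alpha)) + 1               # first k reaching the threshold
print(f, k)                                      # f = [0.61, 0.88, 0.98, 1.0] -> k = 3
```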

PCA: Algorithm

PCA: Example

PCA: In Theory
Given a unit vector u and a point x, the length of the projection of x onto u is x^T u (the signed distance from the origin along u).
Goal: to maximize the variance of the projections, we need to choose a unit-length u that maximizes
(1/n) Σ_{i=1..n} (xi^T u)² = (1/n) Σ_{i=1..n} u^T xi xi^T u = u^T ( (1/n) Σ_{i=1..n} xi xi^T ) u.
The maximizer is the principal eigenvector of the covariance matrix of the data!

Optimal and Non-optimal 2D Approximations
The optimal subspace maximizes the variance and minimizes the squared error, whereas the non-optimal subspace captures less variance and has a high mean squared error, as seen from the lengths of the error vectors (line segments in the figure).

Singular Value Decomposition (SVD)
Principal components analysis is a special case of a more general matrix decomposition method called Singular Value Decomposition (SVD).
SVD input: the n×d data matrix D, factored as D_{n×d} = U_{n×n} S_{n×d} (V_{d×d})^T
- U and V are orthogonal matrices
- The columns of U are the left singular vectors
- The columns of V (rows of V^T) are the right singular vectors
- S is diagonal and holds the singular values

SVD: An Example
We discard the left and right singular vectors that correspond to zero singular values to obtain the reduced SVD, D = U_r S_r (V_r)^T.

SVD
Consider the two special matrices D D^T and D^T D:
- The eigenvectors of D D^T make up the columns of U
- The eigenvectors of D^T D make up the columns of V; for centered data, D^T D is (up to the 1/n factor) the covariance matrix used in PCA
- The singular values (the diagonal entries of S, arranged in descending order) are the square roots of the eigenvalues of D^T D, or equivalently of D D^T
Derivation: D = U S V^T, so D^T = V S^T U^T and D D^T = U S V^T V S^T U^T = U (S S^T) U^T. A numeric check follows below.
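
A quick numeric check of this relationship on a made-up matrix: the squared singular values of D equal the eigenvalues of D^T D.

```python
# Sketch: singular values vs. eigenvalues of D^T D (toy matrix, made up).
import numpy as np

D = np.array([[1.0, 2.0, 0.0],
              [2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 1.0, 1.0]])                 # n*d, n=4, d=3

U, S, Vt = np.linalg.svd(D, full_matrices=False)
eigvals = np.linalg.eigvalsh(D.T @ D)[::-1]     # eigenvalues of D^T D, descending

print(S**2)        # squared singular values ...
print(eigvals)     # ... match the eigenvalues of D^T D (and the nonzero ones of D D^T)
```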

Heuristic Search in Attribute Selection
There are 2^d possible attribute combinations of d attributes. Typical heuristic attribute selection methods:
- Best single attribute under the attribute independence assumption: choose by significance tests
- Best step-wise feature selection: the best single attribute is picked first, then the next best attribute conditioned on the first, and so on
- Step-wise attribute elimination: repeatedly eliminate the worst attribute
- Best combined attribute selection and elimination
- Optimal branch and bound: use attribute elimination and backtracking

Numerosity Reduction
Reduce data volume by choosing alternative, smaller forms of data representation.
- Parametric methods: assume the data fits some model, estimate the model parameters, store only the parameters, and discard the data (except possible outliers). Example: regression.
- Non-parametric methods: do not assume models. Major families: histograms, clustering, sampling, ...

Regression Analysis
Regression analysis is the modeling and analysis of numerical data consisting of values of a dependent variable (also called the response variable or measurement) and of one or more independent variables (also called explanatory variables or predictors).
The parameters are estimated so as to give a “best fit” to the data. Most commonly the best fit is evaluated using the least squares method, but other criteria have also been used.

Regression Analysis: Applications
- Prediction (including forecasting of time-series data)
- Inference and hypothesis testing
- Modeling of causal relationships
(The original slide shows a scatter plot with a fitted line y = x + 1, marking a real value Y1 and its regression value Y1’ at X1.)

Regression Analysis
- Linear regression: Y = w X + b. The two regression coefficients, w and b, specify the line and are estimated from the data at hand by applying the least squares criterion to the known values (X1, Y1), (X2, Y2), .... A small sketch follows below.
- Multiple regression: Y = b0 + b1 X1 + b2 X2. Many nonlinear functions can be transformed into this form.
- Nonlinear regression.
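
A least-squares sketch for the linear case; the data points are made up, and `np.polyfit` with degree 1 plays the role of estimating w and b.

```python
# Sketch: least-squares estimation of w and b for Y = w*X + b (made-up data).
import numpy as np

X = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.array([2.1, 2.9, 4.2, 4.8, 6.1])

w, b = np.polyfit(X, Y, deg=1)        # fits Y ~ w*X + b by minimizing squared error
print(w, b)                           # roughly w ~ 1.0, b ~ 1.0 for this data
Y_pred = w * X + b                    # regression values for the known X
```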

Histogram Analysis
Divide the data into buckets and store the average (or sum) for each bucket.
Partitioning rules:
- Equal-width: equal bucket range
- Equal-frequency (or equal-depth)

Sampling
Obtain a small sample s to represent the whole data set N; this allows a mining algorithm to run in complexity that is potentially sub-linear in the size of the data.
Key principle: choose a representative subset of the data.
- Simple random sampling may perform very poorly in the presence of skew
- Develop adaptive sampling methods, e.g., stratified sampling
Note: sampling may not reduce database I/Os (data is read a page at a time).

Sampling
- Simple random sampling: there is an equal probability of selecting any particular item
  - Sampling without replacement: once an object is selected, it is removed from the population
  - Sampling with replacement: a selected object is not removed from the population
- Stratified sampling: partition the data set and draw samples from each partition (proportionally, i.e., approximately the same percentage of data from each); used in conjunction with skewed data
A sketch of both schemes follows below.
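
A pandas sketch of simple random sampling (with and without replacement) and stratified sampling; the toy data and the 10% fraction are made up.

```python
# Sketch: SRSWOR, SRSWR, and proportional stratified sampling (hypothetical data).
import pandas as pd

data = pd.DataFrame({"value": range(100),
                     "stratum": ["A"] * 80 + ["B"] * 20})    # skewed group sizes

srswor = data.sample(n=10, replace=False, random_state=0)    # without replacement
srswr  = data.sample(n=10, replace=True,  random_state=0)    # with replacement

# Stratified: draw the same percentage (10%) from each stratum
stratified = data.groupby("stratum").sample(frac=0.10, random_state=0)
print(len(srswor), len(srswr), len(stratified))              # 10 10 10
```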

Sampling: An Example
(The original slide illustrates the raw data next to an SRSWOR sample, i.e., a simple random sample without replacement, and an SRSWR sample, i.e., with replacement.)

Stratified Sampling: An Example
(The original slide contrasts the raw data with a cluster/stratified sample.)

Data Transformation
A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values.
Methods:
- Smoothing: remove noise from the data
- Attribute/feature construction: new attributes constructed from the given ones
- Aggregation: summarization, data cube construction
- Normalization: scale to fall within a smaller, specified range (min-max normalization, z-score normalization, normalization by decimal scaling)
- Discretization: concept hierarchy climbing

Normalization
- Min-max normalization to [new_minA, new_maxA]: v’ = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA
  Example: let income range from $12,000 to $98,000, normalized to [0.0, 1.0]; then $73,000 is mapped to (73,000 − 12,000) / (98,000 − 12,000) ≈ 0.709
- Z-score normalization (μ: mean, σ: standard deviation): v’ = (v − μA) / σA
  Example: let μ = 54,000 and σ = 16,000; then $73,000 is mapped to (73,000 − 54,000) / 16,000 ≈ 1.19
- Normalization by decimal scaling: v’ = v / 10^j, where j is the smallest integer such that max(|v’|) < 1
A sketch of all three follows below.
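
A sketch of the three normalizations applied to the income example above; for decimal scaling it assumes the attribute’s largest absolute value is $98,000, so j = 5.

```python
# Sketch: min-max, z-score, and decimal-scaling normalization for the income example.
import numpy as np

v = 73000.0
min_a, max_a = 12000.0, 98000.0

# Min-max normalization to [0.0, 1.0]
v_minmax = (v - min_a) / (max_a - min_a) * (1.0 - 0.0) + 0.0   # ~0.709

# Z-score normalization with mu = 54000, sigma = 16000
v_zscore = (v - 54000.0) / 16000.0                             # ~1.19

# Decimal scaling: divide by 10^j for the smallest j with max(|v'|) < 1
j = int(np.ceil(np.log10(abs(max_a))))                         # j = 5 here
v_decimal = v / 10 ** j                                        # 0.73
print(v_minmax, v_zscore, v_decimal)
```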

Discretization
Divide the range of a continuous attribute into intervals; interval labels can then be used to replace the actual data values.
- Reduces data size
- Split (top-down) vs. merge (bottom-up)
- Supervised vs. unsupervised
- Can be performed recursively on an attribute
- Prepares the data for further analysis, e.g., classification

Discretization
Typical methods (all can be applied recursively):
- Binning: top-down split, unsupervised
- Histogram analysis
- Clustering analysis: unsupervised, top-down split or bottom-up merge
- Decision-tree analysis: supervised, top-down split
- Correlation (e.g., χ2) analysis: unsupervised, bottom-up merge

Binning
- Equal-width (distance) partitioning: divides the range into N intervals of equal size (a uniform grid); if A and B are the lowest and highest values of the attribute, the interval width is W = (B − A) / N. This is the most straightforward approach, but outliers may dominate the presentation, and skewed data is not handled well.
- Equal-depth (frequency) partitioning: divides the range into N intervals, each containing approximately the same number of samples. This gives good data scaling, but managing categorical attributes can be tricky.

Binning: Examples
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
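
The equal-frequency binning and both smoothing methods above can be reproduced with a short NumPy sketch (it relies on the 12 sorted values splitting evenly into 3 bins).

```python
# Sketch: equal-frequency binning, smoothing by bin means and by bin boundaries.
import numpy as np

prices = np.array([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])   # already sorted
bins = np.split(prices, 3)                                          # 3 bins of 4 values each

# Smoothing by bin means: every value becomes its bin's (rounded) mean
by_means = [np.full(len(b), int(round(b.mean()))) for b in bins]    # 9, 23, 29

# Smoothing by bin boundaries: every value snaps to the nearer of min/max of its bin
by_boundaries = [np.where(b - b.min() <= b.max() - b, b.min(), b.max()) for b in bins]

print(by_means)        # bins of 9s, 23s, 29s
print(by_boundaries)   # [4 4 4 15], [21 21 25 25], [26 26 26 34]
```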