
Data Mining
Modified from Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman, and Jeff Ullman, Stanford University (http://www.mmds.org)

Data contains value and knowledge. Knowledge discovered from data can be used for competitive advantage.

Data Mining ≈ Big Data ≈ Predictive Analytics ≈ Data Science
But to extract the knowledge, the data needs to be stored, managed, and ANALYZED.

What is Data Mining?
Given lots of data, data mining is the extraction of useful patterns from data sources, or of models of the data. Discover patterns and models that are:
- Valid: hold on new data with some certainty
- Useful: it should be possible to act on them
- Unexpected: non-obvious to the system
- Understandable: humans should be able to interpret the pattern

Data Mining Tasks
- Descriptive methods: find human-interpretable patterns that describe the data. Example: clustering.
- Predictive methods: use some variables to predict unknown or future values of other variables. Example: recommender systems.

Meaningfulness of Analytic Answers
A risk with data mining is that an analyst can "discover" patterns that are meaningless. Statisticians call this Bonferroni's principle: roughly, if you look in more places for interesting patterns than your amount of data will support, you are bound to find bogus patterns.

Meaningfulness of Analytic Answers
Example: we want to find (unrelated) people who at least twice have stayed at the same hotel on the same day.
- 10^9 people being tracked
- 1,000 days
- Each person stays in a hotel 1% of the time (1 day out of 100)
- Hotels hold 100 people (so 10^5 hotels)
If everyone behaves randomly (i.e., no terrorists), will the data mining detect anything suspicious?
The expected number of "suspicious" pairs of people is about 250,000: far too many combinations to check. We need additional evidence to find "suspicious" pairs of people in some more efficient way.
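The 250,000 figure above can be verified with a quick back-of-the-envelope calculation; this sketch uses the same n²/2 approximation for the number of pairs as the MMDS analysis:

```python
# Expected number of "suspicious" pairs under purely random behavior.
people = 10**9        # people being tracked
days = 1000           # days of observation
p_hotel = 0.01        # probability a given person is in a hotel on a given day
hotels = 10**5        # number of hotels

# Probability two specific people are in the SAME hotel on a given day:
# both in some hotel (0.01 * 0.01), and in the same one (1/hotels).
p_same_day = p_hotel * p_hotel / hotels        # = 1e-9

# Approximate pair counts: (n choose 2) ~ n^2 / 2 for large n.
pairs_of_people = people**2 / 2                # ~5e17
pairs_of_days = days**2 / 2                    # 5e5

# A pair of people is "suspicious" if they coincide on two distinct days.
expected_suspicious = pairs_of_people * pairs_of_days * p_same_day**2
print(expected_suspicious)                     # ≈ 250,000
```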

What matters when dealing with data?
- Challenges: usage, quality, context, streaming, scalability
- Data modalities: structured data, text, networks, signals, multimedia
- Data operators: collect, prepare, represent, model, reason, visualize

Data Mining: Cultures
Data mining overlaps with:
- Databases: large-scale data, simple queries
- Machine learning: small data, complex models
- CS theory: (randomized) algorithms
Different cultures:
- To a DB person, data mining is an extreme form of analytic processing (queries that examine large amounts of data); the result is the query answer.
- To an ML person, data mining is the inference of models; the result is the parameters of the model.

Data Mining: Cultures
Data mining overlaps with machine learning, statistics, artificial intelligence, and databases, but puts more stress on:
- Scalability (big data)
- Algorithms
- Computing architectures
- Automation for handling large data

What will we learn?
We will learn to mine different types of data: high-dimensional data, graphs, infinite/never-ending streams, and labeled data.
We will learn to use different models of computation: MapReduce, streams and online algorithms, and single-machine in-memory processing.

How It All Fits Together
- High-dimensional data: locality-sensitive hashing, clustering, dimensionality reduction
- Graph data: PageRank, SimRank, community detection, spam detection
- Infinite data: filtering data streams, web advertising, queries on streams
- Machine learning: support vector machines, decision trees, perceptron, kNN
- Apps: recommender systems, association rules, duplicate document detection

Data Mining: Related Areas
Database management systems are the 'parent' of data mining. Related areas include machine learning, statistics, pattern recognition, visualization, artificial intelligence, and expert systems; data mining is sometimes considered a subarea of AI. Other areas: neural networks, evolutionary methods, information retrieval, etc.
(Vijay Raghavan, University of Louisiana Lafayette)

I ♥ data. How do you want that data?

Data Preprocessing
Modified from Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data, Bing Liu, University of Illinois at Chicago (UIC)

Why is data mining necessary?
Make use of your data assets. There is a big gap between stored data and knowledge, and the transition won't occur automatically. Many interesting things one wants to find cannot be found using database queries, e.g.:
- "Find people likely to buy my products."
- "Who is likely to respond to my promotion?"
- "Which movies should be recommended to each customer?"

Classic data mining tasks (CS583, Bing Liu, UIC)
- Classification: mining patterns that can classify future (new) data into known classes.
- Association rule mining: mining rules of the form X → Y, where X and Y are sets of data items.
- Clustering: identifying a set of similarity groups in the data.

Classic data mining tasks (cont'd)
- Sequential pattern mining: a sequential rule A → B says that event A will be immediately followed by event B with a certain confidence.
- Deviation detection: discovering the most significant changes in data.
- Data visualization: using graphical methods to show patterns in data.

Data Mining Models and Tasks (figure; Vijay Raghavan, University of Louisiana Lafayette)

Knowledge Discovery and Data Mining (KDD) process
1. Understand the application domain
2. Identify data sources and select target data
3. Pre-processing: cleaning, attribute selection, etc.
4. Data mining to extract patterns or models
5. Post-processing: identifying interesting or useful patterns/knowledge
6. Incorporate patterns/knowledge into real-world tasks

Knowledge Discovery and Data Mining (KDD) process
Note: the order of the initial steps and what is included in each step may vary depending on whom you talk to.

Data Preprocessing
- Why preprocess the data?
- Data cleaning
- Data integration and transformation
- Data reduction
- Discretization
- Summary

Why Data Preprocessing?
Data in the real world is dirty:
- Incomplete: missing attribute values (e.g., occupation=""), lacking certain attributes of interest, or containing only aggregate data
- Noisy: containing errors or outliers (e.g., Salary="-10")
- Inconsistent: containing discrepancies in codes or names, e.g., Age="42" but Birthday="03/07/1997"; rating was "1,2,3", now "A,B,C"; discrepancies between duplicate records

Why Is Data Preprocessing Important?
No quality data, no quality mining results! Quality decisions must be based on quality data; e.g., duplicate or missing data may cause incorrect or even misleading statistics. Data preparation, cleaning, and transformation comprise the majority of the work in a data mining application (roughly 90%).

Multi-Dimensional Measure of Data Quality
A well-accepted multi-dimensional view: accuracy, completeness, consistency, timeliness, believability, value added, interpretability, accessibility.

Major Tasks in Data Preprocessing
- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integration of multiple databases or files
- Data transformation: normalization and aggregation
- Data reduction: obtain a reduced representation in volume that produces the same or similar analytical results
- Data discretization (for numerical data)


Data Cleaning
Importance: "Data cleaning is the number one problem in data warehousing."
Data cleaning tasks:
- Fill in missing values
- Identify outliers and smooth out noisy data
- Correct inconsistent data
- Resolve redundancy caused by data integration

Missing Data
Data is not always available: many tuples have no recorded values for several attributes, such as customer income in sales data. Missing data may be due to:
- equipment malfunction
- data inconsistent with other recorded data, and thus deleted
- data not entered due to misunderstanding
- certain data not considered important at the time of entry
- history or changes of the data not registered

How to Handle Missing Data?
- Ignore the tuple
- Fill in missing values manually: tedious + infeasible?
- Fill in automatically with:
  - a global constant, e.g., "unknown" (a new class?!)
  - the attribute mean
  - the most probable value: inference-based, using e.g. a Bayesian formula, a decision tree, or the EM algorithm
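The attribute-mean strategy above can be sketched in plain Python; here `None` stands in for a missing value and the income data is hypothetical:

```python
# Sketch: filling missing values with the attribute mean.

def fill_with_mean(values):
    """Replace None entries with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

incomes = [30000, None, 50000, 40000, None]
print(fill_with_mean(incomes))   # [30000, 40000.0, 50000, 40000, 40000.0]
```

Mean imputation is simple but distorts the attribute's variance; the inference-based methods on the slide exist precisely to avoid that.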

Noisy Data
Noise: random error or variance in a measured variable. Incorrect attribute values may be due to:
- faulty data collection instruments
- data entry problems
- data transmission problems
- etc.
Other data problems which require data cleaning: duplicate records, incomplete data, inconsistent data.

How to Handle Noisy Data?
- Binning: first sort the data and partition it into (equi-depth) bins; then smooth by bin means, bin medians, bin boundaries, etc.
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and have a human check them, e.g., to deal with possible outliers

Binning Methods for Data Smoothing
Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
Partition into (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
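The worked example above can be reproduced in a few lines (a sketch; bin means are rounded to integers, as on the slide):

```python
# Sketch reproducing the slide's equi-depth binning and smoothing.

def equi_depth_bins(sorted_values, depth):
    """Partition already-sorted values into bins of equal size."""
    return [sorted_values[i:i + depth]
            for i in range(0, len(sorted_values), depth)]

def smooth_by_means(bins):
    """Replace each value by its bin's (rounded) mean."""
    return [[round(sum(b) / len(b))] * len(b) for b in bins]

def smooth_by_boundaries(bins):
    """Replace each value by the closest bin boundary (min or max)."""
    return [[b[0] if v - b[0] <= b[-1] - v else b[-1] for v in b]
            for b in bins]

prices = [4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34]
bins = equi_depth_bins(prices, 4)
print(smooth_by_means(bins))       # [[9, 9, 9, 9], [23, 23, 23, 23], [29, 29, 29, 29]]
print(smooth_by_boundaries(bins))  # [[4, 4, 4, 15], [21, 21, 25, 25], [26, 26, 26, 34]]
```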

Outlier Removal
Outliers are data points inconsistent with the majority of the data. Different kinds of outliers:
- Valid: a CEO's salary
- Noisy: someone's age = 200; widely deviated points
Removal methods: clustering, curve-fitting, hypothesis testing with a given model.


Data Integration
Data integration combines data from multiple sources.
- Schema integration: integrate metadata from different sources
- Entity identification problem: identify real-world entities across multiple data sources, e.g., A.cust-id ≡ B.cust-#
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources differ, e.g., different scales, metric vs. British units
- Removing duplicates and redundant data

Data Transformation
- Smoothing: remove noise from the data
- Normalization: scale values to fall within a small, specified range
- Attribute/feature construction: new attributes constructed from the given ones
- Aggregation: summarization
- Generalization: concept hierarchy climbing

Data Transformation: Normalization
- Min-max normalization: v' = (v - min) / (max - min) * (new_max - new_min) + new_min
- Z-score normalization: v' = (v - mean) / std_dev
- Normalization by decimal scaling: v' = v / 10^j, where j is the smallest integer such that max(|v'|) < 1
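A minimal sketch of the three formulas; the numeric values fed in are illustrative salary-like figures, not data from the slides:

```python
# Sketch of the three normalization methods.

def min_max(v, vmin, vmax, new_min=0.0, new_max=1.0):
    """Map v from [vmin, vmax] onto [new_min, new_max]."""
    return (v - vmin) / (vmax - vmin) * (new_max - new_min) + new_min

def z_score(v, mean, std):
    """Express v as a number of standard deviations from the mean."""
    return (v - mean) / std

def decimal_scaling(values):
    """Divide by 10^j, with j the smallest integer making all |v'| < 1."""
    j = 0
    while max(abs(v) for v in values) / 10**j >= 1:
        j += 1
    return [v / 10**j for v in values]

print(min_max(73600, 12000, 98000))        # ≈ 0.716
print(z_score(73600, 54000, 16000))        # 1.225
print(decimal_scaling([-975, 223, 917]))   # [-0.975, 0.223, 0.917]
```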


Data Reduction Strategies
Data is often too big to work with. Data reduction obtains a reduced representation of the data set that is much smaller in volume, yet produces the same (or almost the same) analytical results. Strategies:
- Dimensionality reduction: remove unimportant attributes
- Aggregation and clustering
- Sampling

Dimensionality Reduction
Feature selection (i.e., attribute subset selection): select a minimum set of attributes (features) that is sufficient for the data mining task. Heuristic methods (due to the exponential number of choices):
- step-wise forward selection
- step-wise backward elimination
- combining forward selection and backward elimination
- etc.
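Step-wise forward selection can be sketched as a greedy loop. The `score` function here is a hypothetical stand-in for whatever merit measure the task uses (model accuracy, information gain, etc.); the attribute names and toy scores are illustrative only:

```python
# Sketch of step-wise forward selection (greedy heuristic).

def forward_selection(attributes, score, k):
    """Grow a subset by repeatedly adding the attribute that helps most."""
    selected = []
    remaining = list(attributes)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda a: score(selected + [a]))
        if score(selected + [best]) <= score(selected):
            break                      # no remaining attribute improves the subset
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: pretend only 'age' and 'income' are informative.
useful = {"age": 0.5, "income": 0.3}
toy_score = lambda subset: sum(useful.get(a, 0.0) for a in subset)
print(forward_selection(["zip", "age", "income", "id"], toy_score, 3))
# ['age', 'income']
```

The greedy loop examines O(d²) subsets instead of the 2^d possible ones, which is exactly why the slide calls these methods heuristic.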

Histograms
A popular data reduction technique: divide the data into buckets and store the average (or sum) for each bucket.

Clustering
Partition the data set into clusters; one can then store only the cluster representations. This can be very effective if the data is clustered, but not if it is "smeared". There are many choices of clustering definitions and clustering algorithms.

Sampling
Choose a representative subset of the data.
- Simple random sampling may perform poorly in the presence of skew, so adaptive sampling methods have been developed.
- Stratified sampling: approximate the percentage of each class (or subpopulation of interest) in the overall database; used in conjunction with skewed data.
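Stratified sampling can be sketched in a few lines; the `stratified_sample` helper and the fraud/ok data below are illustrative, not from the slides:

```python
# Sketch: stratified sampling keeps each class's share of the sample
# proportional to its share of the database, so rare classes survive.
import random

def stratified_sample(records, key, fraction, seed=0):
    """Draw roughly `fraction` of the records from each stratum."""
    rng = random.Random(seed)
    strata = {}
    for r in records:
        strata.setdefault(key(r), []).append(r)
    sample = []
    for group in strata.values():
        n = max(1, round(len(group) * fraction))   # keep rare strata represented
        sample.extend(rng.sample(group, n))
    return sample

# Skewed toy data: 10 fraud records among 990 ordinary ones.
data = [("fraud", i) for i in range(10)] + [("ok", i) for i in range(990)]
s = stratified_sample(data, key=lambda r: r[0], fraction=0.1)
print(len(s))                          # 100, with both classes present
```

A 10% simple random sample of the same data would miss the fraud class entirely about a third of the time; the stratified version never does.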

Sampling (figure): raw data vs. a cluster/stratified sample.


Discretization
Three types of attributes:
- Nominal: values from an unordered set
- Ordinal: values from an ordered set
- Continuous: real numbers
Discretization: divide the range of a continuous attribute into intervals, because some data mining algorithms accept only categorical attributes. Some techniques:
- Binning methods: equal-width, equal-frequency
- Entropy-based methods

Discretization and Concept Hierarchy
- Discretization: reduce the number of values for a given continuous attribute by dividing its range into intervals. Interval labels can then be used to replace actual data values.
- Concept hierarchies: reduce the data by collecting and replacing low-level concepts (such as numeric values for the attribute age) with higher-level concepts (such as young, middle-aged, or senior).

Binning
Attribute values (for one attribute, e.g., age): 0, 4, 12, 16, 16, 18, 24, 26, 28
Equi-width binning, for a bin width of e.g. 10:
- Bin 1: 0, 4 — the [-∞, 10) bin
- Bin 2: 12, 16, 16, 18 — the [10, 20) bin
- Bin 3: 24, 26, 28 — the [20, +∞) bin
(-∞ denotes negative infinity, +∞ positive infinity)
Equi-frequency binning, for a bin density of e.g. 3:
- Bin 1: 0, 4, 12 — the [-∞, 14) bin
- Bin 2: 16, 16, 18 — the [14, 21) bin
- Bin 3: 24, 26, 28 — the [21, +∞) bin
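Both binning schemes above can be sketched as follows (the helper names are illustrative; equi-width groups by interval, equi-frequency cuts the sorted data into equal-size runs):

```python
# Sketch reproducing the slide's equi-width and equi-frequency bins.
import math

def equi_width(sorted_values, width):
    """Group values by the interval [k*width, (k+1)*width) they fall in."""
    bins = {}
    for v in sorted_values:
        bins.setdefault(math.floor(v / width), []).append(v)
    return [bins[k] for k in sorted(bins)]

def equi_frequency(sorted_values, per_bin):
    """Cut the sorted values into runs of equal size."""
    return [sorted_values[i:i + per_bin]
            for i in range(0, len(sorted_values), per_bin)]

ages = [0, 4, 12, 16, 16, 18, 24, 26, 28]
print(equi_width(ages, 10))      # [[0, 4], [12, 16, 16, 18], [24, 26, 28]]
print(equi_frequency(ages, 3))   # [[0, 4, 12], [16, 16, 18], [24, 26, 28]]
```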

Entropy-based
Given attribute-value/class pairs: (0,P), (4,P), (12,P), (16,N), (16,N), (18,P), (24,N), (26,N), (28,N)
Entropy-based binning via binarization: intuitively, find the best split so that the bins are as pure as possible; formally, this is characterized by maximal information gain.
Let S denote the above 9 pairs, p = 4/9 the fraction of P pairs, and n = 5/9 the fraction of N pairs.
Entropy(S) = -p log2 p - n log2 n
- Smaller entropy: the set is relatively pure; the smallest value is 0.
- Larger entropy: the set is mixed; the largest value is 1 (for two classes).

Entropy-based (cont'd)
Let v be a possible split; S is divided into S1 (value <= v) and S2 (value > v).
Information of the split: I(S1,S2) = (|S1|/|S|) Entropy(S1) + (|S2|/|S|) Entropy(S2)
Information gain of the split: Gain(v,S) = Entropy(S) - I(S1,S2)
Goal: the split with maximal information gain (maximum gain means minimum I). The possible splits are the midpoints between any two consecutive values.
For v = 14: I(S1,S2) = 0 + (6/9) * Entropy(S2) = (6/9) * 0.65 = 0.433, so Gain(14,S) = Entropy(S) - 0.433.
The best split is found after examining all possible splits.
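The split search can be sketched end to end; running it on the slide's nine pairs confirms v = 14 as the best split:

```python
# Sketch of the entropy-based split search on the slide's 9 pairs.
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    total = len(labels)
    result = 0.0
    for c in set(labels):
        p = labels.count(c) / total
        result -= p * math.log2(p)
    return result

def best_split(pairs):
    """Try every midpoint between consecutive values; maximize gain."""
    pairs = sorted(pairs)
    base = entropy([c for _, c in pairs])
    best = None
    for i in range(len(pairs) - 1):
        if pairs[i][0] == pairs[i + 1][0]:
            continue                   # no midpoint between equal values
        v = (pairs[i][0] + pairs[i + 1][0]) / 2
        left = [c for x, c in pairs if x <= v]
        right = [c for x, c in pairs if x > v]
        info = (len(left) * entropy(left)
                + len(right) * entropy(right)) / len(pairs)
        gain = base - info
        if best is None or gain > best[1]:
            best = (v, gain)
    return best

pairs = [(0, 'P'), (4, 'P'), (12, 'P'), (16, 'N'), (16, 'N'),
         (18, 'P'), (24, 'N'), (26, 'N'), (28, 'N')]
v, gain = best_split(pairs)
print(v)                 # 14.0
print(round(gain, 3))    # 0.558 (= Entropy(S) - 0.433)
```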

Summary
Data preparation is a big issue for data mining. It includes:
- Data cleaning and data integration
- Data reduction and feature selection
- Discretization
Many methods have been proposed, but this is still an active area of research.