
1 2011 Data Mining, Industrial & Information Systems Engineering
Chapter 2: Overview of Data Mining Process
Pilsung Kang, Industrial & Information Systems Engineering, Seoul National University of Science & Technology

2 2011 Data Mining, IISE, SNUT
Data Mining Definition Revisited
- Extracting useful information from large datasets. (Hand et al., 2001)
- Data mining is the process of exploration and analysis, by automatic or semi-automatic means, of large quantities of data in order to discover meaningful patterns and rules. (Berry and Linoff, 1997, 2000)
- Data mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques. (Gartner Group, 2004)

3 2011 Data Mining, IISE, SNUT
Descriptive vs. Predictive (purpose)
Descriptive Modeling
- Looks back to the past.
- Extracts compact and easily understood information from large, sometimes gigantic databases.
- Tools: OLAP (online analytical processing), SQL (structured query language).
Predictive Modeling
- Looks toward the future.
- Identifies strong links between the variables of the data.
- Predicts an unknown consequence (dependent variable) from the information provided (independent variables): y = f(x1, x2, ..., xn) + ε

4 2011 Data Mining, IISE, SNUT
Supervised vs. Unsupervised (methods)
Supervised Learning
- Goal: predict a single "target" or "outcome" variable.
- Finds relations between X and Y.
- Trains (learns) on data where the target value is known.
- Scores data where the target value is not known.
Unsupervised Learning
- There is no target (outcome) variable to predict or classify.
- Explores intrinsic characteristics of the data.
- Estimates the underlying distribution.
- Segments data into meaningful groups or detects patterns.

5 2011 Data Mining, IISE, SNUT
Data Mining Technique 1: Data Visualization
- Graphs and plots of data: histograms, boxplots, bar charts, scatterplots.
- Especially useful to examine relationships between pairs of variables.
- Descriptive & Unsupervised.
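A minimal sketch of this kind of exploratory plotting with pandas and matplotlib; the toy dataset and column names below are assumptions made only for illustration, not data from the slides.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Toy dataset (assumed): household income and lot size.
df = pd.DataFrame({
    "income":   [60, 85, 75, 52, 110, 64, 95, 70],                     # x $1000
    "lot_size": [18.4, 23.6, 19.8, 17.2, 24.0, 18.8, 22.0, 20.4],      # x 1000 sqft
})

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
axes[0].hist(df["income"], bins=5)             # histogram: distribution of one variable
axes[0].set_title("Histogram of income")
axes[1].boxplot(df["lot_size"])                # boxplot: median, quartiles, outliers
axes[1].set_title("Boxplot of lot size")
axes[2].scatter(df["income"], df["lot_size"])  # scatterplot: relationship between a pair
axes[2].set_title("Income vs. lot size")
plt.tight_layout()
plt.show()
```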

6 2011 Data Mining, IISE, SNUT
Data Mining Technique 2: Data Reduction
- Distillation of complex/large data into simpler/smaller data.
- Reducing the number of variables/columns: also called dimensionality reduction (variable selection, variable extraction, e.g., principal component analysis).
- Reducing the number of records/rows: also called data compression (e.g., sampling and clustering).
- Descriptive & Unsupervised.

7 2011 Data Mining, IISE, SNUT
Data Mining Technique 3: Segmentation/Clustering
- Goal: divide the entire data into a small number of subgroups.
- Homogeneous within groups, heterogeneous between groups.
- Examples: market segmentation, social network analysis.
- Descriptive & Unsupervised.

8 2011 Data Mining, IISE, SNUT
Segmentation/Clustering example: hierarchical clustering.
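A minimal hierarchical-clustering sketch with SciPy; the two-dimensional toy points are assumptions, not the data shown on the slide.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram
import matplotlib.pyplot as plt

# Toy 2-D points (assumed): two visually separable groups.
X = np.array([[1.0, 1.1], [1.2, 0.9], [0.8, 1.0],
              [5.0, 5.2], [5.1, 4.8], [4.9, 5.0]])

Z = linkage(X, method="ward")                     # agglomerative merge history
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
print(labels)                                     # e.g. [1 1 1 2 2 2]

dendrogram(Z)                                     # the tree diagram typical of this method
plt.show()
```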

9 2011 Data Mining, IISE, SNUT
Data Mining Technique 4: Classification
- Goal: predict a categorical target (outcome) variable.
- Examples: purchase/no purchase, fraud/no fraud, creditworthy/not creditworthy.
- Each row is a case/record/instance; each column is a variable.
- The target variable is often binary (yes/no).
- Predictive & Supervised.

10 2011 Data Mining, IISE, SNUT
Classification example: decision tree.
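A minimal decision-tree classification sketch with scikit-learn; the riding-mower-style toy data below is an assumption used only for illustration.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data (assumed): [income (x$1000), lot size (x1000 sqft)] -> buyer (1) / non-buyer (0)
X = [[60, 18], [85, 23], [75, 20], [52, 17], [110, 24], [64, 19], [95, 22], [70, 20]]
y = [0, 1, 1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "lot_size"]))  # the learned split rules

# Score a new record whose target is unknown.
print(tree.predict([[80, 21]]))
```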

11 2011 Data Mining, IISE, SNUT
Classification example: logistic regression
- Play if 1/(1 + exp(-0.2*outlook + 0.4*humidity + 0.8*windy)) > 0.5
- Else, do not play.
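Read as code, the slide's rule is a thresholded sigmoid. A minimal sketch follows; the coefficients come from the slide, while the function name and the numeric encodings of the inputs in the example call are assumptions.

```python
import math

def play_decision(outlook, humidity, windy):
    # Probability of playing, using the coefficients shown on the slide.
    prob = 1.0 / (1.0 + math.exp(-0.2 * outlook + 0.4 * humidity + 0.8 * windy))
    return "play" if prob > 0.5 else "do not play"

# Example call with assumed (illustrative) numeric encodings of the inputs.
print(play_decision(outlook=1.0, humidity=0.3, windy=0.0))
```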

12 2011 Data Mining, IISE, SNUT
Classification example: "Separate the riding mower buyers (●) from non-buyers (○)" (x-axis: income (x $1000), y-axis: lot size (x 1000 sqft)).

13 2011 Data Mining, IISE, SNUT
Data Mining Technique 5: Prediction
- Goal: predict a numerical target (outcome) variable.
- Examples: sales, revenue, performance.
- As in classification, each row is a case/record/instance and each column is a variable.
- Taken together, classification and prediction constitute "predictive analytics."
- Predictive & Supervised.

14 2011 Data Mining, IISE, SNUT
Prediction example: neural networks.
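A minimal numerical-prediction sketch using a small scikit-learn neural network (MLPRegressor); the synthetic data is an assumption, not the example in the slide figure.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic regression data (assumed): target = noisy linear function of two inputs.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.1, size=200)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict([[0.5, 0.2]]))   # predicted numerical target for a new record
```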

15 2011 Data Mining, IISE, SNUT
Data Mining Technique 6: Association Rules
- Goal: produce rules that define "what goes with what."
- Example: "If X was purchased, Y was also purchased."
- Rows are transactions.
- Used in recommender systems: "Our records show you bought X; you may also like Y."
- Also called "affinity analysis" or "market basket analysis."
- Predictive & Unsupervised.

16 2011 Data Mining, IISE, SNUT
Association rule example: market basket analysis (Wal-Mart in the USA, E-Mart in Korea).

17 2011 Data Mining, IISE, SNUT
Data Mining Technique 7: Novelty Detection
- Goal: identify whether a new case is similar to the given 'normal' cases.
- Examples: medical diagnosis, fault detection, identity verification.
- Each row is a case/record/instance; each column is a variable.
- No explicit target variable, but all training records are assumed to share the same target.
- Also called "outlier detection" or "one-class classification."
- Predictive & Unsupervised.
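A minimal novelty-detection sketch using a one-class SVM from scikit-learn (one common one-class classifier); the 'normal' training data and the two test cases are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Train only on records assumed to be 'normal' (e.g. a legitimate user's keystroke timings).
rng = np.random.default_rng(0)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))

detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal)

# +1 = similar to the normal cases, -1 = novelty/outlier.
print(detector.predict([[0.1, -0.2],    # close to the training data
                        [6.0, 6.0]]))   # far from it
```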

18 2011 Data Mining, IISE, SNUT
Novelty detection example: keystroke dynamics-based user authentication (http://ksd.snu.ac.kr).

19 2011 Data Mining, IISE, SNUT
Summary of the seven techniques (the original slide arranges them along the descriptive/predictive and supervised/unsupervised axes):
- Descriptive & Unsupervised: data visualization, data reduction, segmentation/clustering.
- Predictive & Supervised: classification, prediction.
- Predictive & Unsupervised: association rules, novelty detection.

20 2011 Data Mining, IISE, SNUT
Steps in Data Mining
1. Define and understand the purpose of the data mining project
2. Formulate the data mining problem
3. Obtain/verify/modify the data
4. Explore and customize the data
5. Build data mining models
6. Evaluate and interpret the results
7. Deploy and monitor the model

21 2011 Data Mining, IISE, SNUT
Step 1: Define and understand the purpose of the data mining project
- Why do we have to conduct this project?
- What would be the achievement if the project succeeds?
(Jun. 2010: http://www.kdnuggets.com)

22 2011 Data Mining, IISE, SNUT
Step 2: Formulate the data mining problem
- What is the purpose? Increase sales. Detect cancer patients.
- What data mining task is appropriate? Classification, prediction, association rules, ...

23 2011 Data Mining, IISE, SNUT
Step 3: Obtain/verify/modify the data: data acquisition
- Data sources: data warehouse, data mart, ...
- Define the input variables and the target variable if necessary.
- Example: churn prediction for a credit card service. Inputs: age, sex, tenure, amount of spending, risk grade, ... Target: whether he/she leaves the company.

24 2011 Data Mining, IISE, SNUT
Step 3: Obtain/verify/modify the data: outlier detection
- Outlier: "a value that the variable cannot have" or "an extremely rare value" (e.g., age 990, height -150 cm).
- Real databases contain many outliers, for many different reasons.
- How to deal with outliers? Ignore the records with outliers if the total number of records is sufficient; otherwise replace the outlier with another value (mean, median, an estimate from a certain pdf, etc.).

25 2011 Data Mining, IISE, SNUT
Step 3: Obtain/verify/modify the data: missing value imputation
- Missing value: a variable is missing when it has a null value in the database although it should have a real value; caused by operational errors or human errors.
- How to deal with missing values? Ignore the records with missing values if the total number of records is sufficient; otherwise replace the missing value with another value (mean, median, an estimate from a certain pdf, etc.).
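A minimal sketch of the two options above with pandas; the column names and values are assumptions.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 45, 38],
                   "income": [1.0e6, 2.0e6, np.nan, 2.5e6]})

# Option 1: ignore (drop) records with missing values when enough records remain.
dropped = df.dropna()

# Option 2: replace missing values with a summary statistic such as the column median.
imputed = df.fillna(df.median(numeric_only=True))
print(imputed)
```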

26 2011 Data Mining, IISE, SNUT
Step 3: Obtain/verify/modify the data: variable handling
- Types of variables:
- Binary: 0/1 (e.g., benign/malignant in medical diagnosis).
- Categorical: more than two values, ordered (high, middle, low) or unordered (e.g., color, job).
- Ordinal: ordered, but the differences between two consecutive values are not identical (e.g., rank in the final exam).
- Interval: continuous, the differences between two consecutive values are identical (e.g., age, height, weight).

27 2011 Data Mining, IISE, SNUT
Step 3: Obtain/verify/modify the data: variable handling
- Variable transformation:
- Binning: interval → binary or ordered categorical (e.g., Low / Mid / High).
- 1-of-C coding: unordered categorical → binary dummy variables. Example: "Color: yellow, red, blue, green" becomes (d1, d2, d3) with yellow = (1, 0, 0), red = (0, 1, 0), blue = (0, 0, 1), green = (0, 0, 0).
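A minimal sketch of both transformations with pandas; the age bins and the color column are assumptions loosely modeled on the slide's example.

```python
import pandas as pd

df = pd.DataFrame({"age": [23, 37, 58, 45],
                   "color": ["yellow", "red", "blue", "green"]})

# Binning: interval variable -> ordered categorical (Low / Mid / High).
df["age_bin"] = pd.cut(df["age"], bins=[0, 30, 50, 120], labels=["Low", "Mid", "High"])

# 1-of-C coding: unordered categorical -> binary dummy variables.
# drop_first=True gives C-1 dummies, so one category becomes the all-zero code
# (which category is dropped depends on pandas' ordering, unlike the slide's fixed choice of green).
dummies = pd.get_dummies(df["color"], prefix="d", drop_first=True)
print(df.join(dummies))
```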

28 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data visualization
- Single variable:
- Histogram: shows the distribution of a single variable; makes it possible to check normality.
- Box plot: shows the median, the first and third quartiles, the "min"/"max" whiskers, the mean, and outliers.

29 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data visualization
- Multiple variables:
- Correlation table: indicates which variables are highly (positively or negatively) correlated; helps remove irrelevant variables or select representative variables.

30 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data visualization
- Multiple variables:
- Scatter plot matrix: shows the relation between each pair of variables.
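A minimal sketch of a correlation table and a scatter plot matrix with pandas; the variables v1, v2, v3 and their relationships are assumptions.

```python
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
df = pd.DataFrame({"v1": rng.normal(size=100)})
df["v2"] = 0.9 * df["v1"] + rng.normal(0, 0.3, 100)   # strongly correlated with v1
df["v3"] = rng.normal(size=100)                        # roughly independent of the others

print(df.corr().round(2))            # correlation table
scatter_matrix(df, figsize=(6, 6))   # pairwise scatter plots
plt.show()
```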

31 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: dimensionality reduction
- Curse of dimensionality: the number of records required to sustain the same explanatory power increases exponentially as the number of variables increases (2^1 = 2, 2^2 = 4, 2^3 = 8, ...).
- "If there are various logical ways to explain a certain phenomenon, the simplest is the best." (Occam's Razor)

32 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: dimensionality reduction
- Variable reduction: select a small set of relevant variables (correlation analysis, Kolmogorov-Smirnov test, ...).
- Example correlation table from the slide (upper triangle, V1 through V6):
- V1: V1 1, V2 0.9, V3 -0.8, V4 0.1, V5 0.2, V6 0
- V2: V2 1, V3 -0.7, V4 0.2, V5 0.1
- V3: V3 1, V4 -0.1, V5 0.1, V6 -0.1
- V4: V4 1, V5 0.9, V6 0.3
- V5: V5 1, V6 -0.9
- V6: V6 1
- → Select V1 & V4: they are nearly uncorrelated with each other, while the variables highly correlated with them add little extra information.

33 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: dimensionality reduction
- Variable extraction: construct a new variable that condenses the information in the original variables, e.g., principal component analysis (PCA).
- Example. Original variables: age, sex, height, weight, income, property, tax paid. Constructed variables: Var1 = age + 3*I(sex = female) + 0.2*height - 0.3*weight; Var2 = income + 0.1*property + 2*tax paid.
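A minimal variable-extraction sketch with PCA in scikit-learn; the toy data is an assumption, and unlike the slide's illustrative Var1/Var2 weights, PCA learns its own weights from the data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Toy data (assumed): 4 original variables, two correlated pairs.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 2))
X = np.column_stack([base[:, 0],
                     base[:, 0] * 0.9 + rng.normal(0, 0.1, 100),
                     base[:, 1],
                     base[:, 1] * 1.1 + rng.normal(0, 0.1, 100)])

Z = StandardScaler().fit_transform(X)          # PCA is sensitive to variable scales
pca = PCA(n_components=2).fit(Z)
X_new = pca.transform(Z)                       # 2 constructed variables instead of 4
print(pca.explained_variance_ratio_)           # information kept by each new variable
```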

34 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: instance reduction
- Random sampling: select a small set of records with a uniform sampling probability; in classification, the class ratios are preserved.
- Stratified sampling: select records so that rare events have a higher probability of being selected; in classification, the class ratios are modified. Under-sampling: preserve the minority class, reduce the majority class. Over-sampling: preserve the majority class, increase the minority class.
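A minimal sketch contrasting random sampling with under-sampling in pandas; the column names, class labels, and sampling sizes are assumptions.

```python
import pandas as pd

# Toy imbalanced data (assumed): about 2% positives.
df = pd.DataFrame({"x": range(1000),
                   "churn": [1 if i % 50 == 0 else 0 for i in range(1000)]})

# Random sampling: every record has the same selection probability; class ratio is roughly preserved.
random_sample = df.sample(frac=0.1, random_state=0)

# Under-sampling: keep all rare positives, thin out the majority class (class ratio is modified).
minority = df[df["churn"] == 1]
majority = df[df["churn"] == 0].sample(n=len(minority) * 3, random_state=0)
balanced = pd.concat([minority, majority])

print(df["churn"].mean(), balanced["churn"].mean())   # class ratio before vs. after
```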

35 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data separation
- Over-fitting: occurs when a data mining algorithm 'memorizes' the given data, including the unnecessary parts (noise, outliers, etc.).

36 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data partition
- Training data: used to build a model, i.e., to train the data mining algorithm.
- Validation data: used to select the best parameters for each model.
- Test data: used to select the best model among the algorithms considered.
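A minimal training/validation/test partition sketch with scikit-learn; the 60/20/20 split and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic data (assumed).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# First split off the test set, then split the remainder into training and validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

print(len(X_train), len(X_val), len(X_test))   # 300 100 100 -> 60% / 20% / 20%
```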

37 2011 Data Mining, IISE, SNUT
Step 4: Explore and customize the data: data normalization
- Normalization (standardization): eliminates the effect of different measurement scales or units.
- z-score: (value - mean) / (standard deviation).
- Example from the slide. Original data: Id 1: age 25, income 1,000,000; Id 2: age 35, income 2,000,000; Id 3: age 45, income 3,000,000 (mean 35 and 2,000,000; stdev 5 and 1,000,000 as given on the slide). Normalized data: Id 1: -2, -1; Id 2: 0, 0; Id 3: 2, 1 (mean 0, stdev 1 for both columns).
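A minimal z-score sketch with pandas and scikit-learn using the table's toy values; note that the exact numbers depend on whether the sample or population standard deviation is used, so they may differ slightly from the slide.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"age": [25, 35, 45],
                   "income": [1_000_000, 2_000_000, 3_000_000]})

# z-score: (value - mean) / standard deviation, computed column by column.
normalized = (df - df.mean()) / df.std()        # pandas uses the sample standard deviation
print(normalized)

# The same idea with scikit-learn (uses the population standard deviation instead).
print(StandardScaler().fit_transform(df))
```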

38 2011 Data Mining, IISE, SNUT
Step 5: Build data mining models
- Classification: logistic regression, k-nearest neighbors, naïve Bayes, classification trees, neural networks, linear discriminant analysis.
- Prediction: linear regression, k-nearest neighbors, regression trees, neural networks.
- Association rules: Apriori algorithm.
- Clustering: hierarchical clustering, k-means clustering.

39 2011 Data Mining, IISE, SNUT
Step 6: Evaluate and interpret the results
- Classification performance: confusion matrix.
- Actual 1 (+), predicted 1 (+): true positive (A). Actual 1 (+), predicted 0 (-): false negative, Type II error (B). Actual 0 (-), predicted 1 (+): false positive, Type I error (C). Actual 0 (-), predicted 0 (-): true negative (D).
- Sensitivity: A / (A + B). Specificity: D / (C + D).
- Simple accuracy: (A + D) / (A + B + C + D).
- Balanced correction rate (BCR): sqrt(sensitivity * specificity) = sqrt((A / (A + B)) * (D / (C + D))).
- Also: lift charts, receiver operating characteristic (ROC) curve, etc.
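A minimal sketch computing these quantities with scikit-learn; the label vectors are assumptions.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 1])   # actual classes (assumed)
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 1])   # predicted classes (assumed)

# confusion_matrix returns [[TN, FP], [FN, TP]] for labels ordered 0, 1.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                       # true positive rate
specificity = tn / (tn + fp)                       # true negative rate
bcr         = (sensitivity * specificity) ** 0.5   # balanced correction rate (geometric mean)
print(accuracy, sensitivity, specificity, bcr)
```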

40 2011 Data Mining, IISE, SNUT
Step 6: Evaluate and interpret the results
- Prediction performance (y: actual target value, y': predicted target value, n: number of records):
- Mean squared error: MSE = (1/n) * Σ (y_i - y'_i)^2
- Root mean squared error: RMSE = sqrt(MSE)
- Mean absolute error: MAE = (1/n) * Σ |y_i - y'_i|
- Mean absolute percentage error: MAPE = (100%/n) * Σ |y_i - y'_i| / |y_i|
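A minimal sketch of these error measures with NumPy; the actual and predicted vectors are assumptions.

```python
import numpy as np

y      = np.array([100.0, 150.0, 200.0, 250.0])   # actual target values (assumed)
y_pred = np.array([110.0, 140.0, 210.0, 240.0])   # predicted target values (assumed)

mse  = np.mean((y - y_pred) ** 2)                      # mean squared error
rmse = np.sqrt(mse)                                    # root mean squared error
mae  = np.mean(np.abs(y - y_pred))                     # mean absolute error
mape = np.mean(np.abs(y - y_pred) / np.abs(y)) * 100   # mean absolute percentage error (%)
print(mse, rmse, mae, mape)
```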

41 2011 Data Mining, IISE, SNUT
Step 6: Evaluate and interpret the results
- Clustering: within variance is the variance among records in a single cluster; between variance is the variance between clusters. Good clustering: high between variance and low within variance.
- Association rules (for a rule X → Y):
- Support = P(X and Y) = (number of transactions containing both X and Y) / (total number of transactions)
- Confidence = P(Y | X) = Support(X and Y) / Support(X)
- Lift = Confidence / Support(Y)
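A minimal sketch computing support, confidence, and lift for one rule from a toy transaction list; the baskets and the rule X = {beer} → Y = {diapers} are assumptions.

```python
# Toy transactions (assumed); each set is one market basket.
transactions = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"beer", "chips"},
    {"milk", "diapers"},
    {"milk", "bread"},
]

X, Y = {"beer"}, {"diapers"}
n = len(transactions)

support_x  = sum(X <= t for t in transactions) / n          # fraction containing X
support_y  = sum(Y <= t for t in transactions) / n          # fraction containing Y
support_xy = sum((X | Y) <= t for t in transactions) / n    # support of the rule (X and Y together)

confidence = support_xy / support_x                         # P(Y | X)
lift       = confidence / support_y                         # > 1 means X and Y go together
print(support_xy, confidence, lift)
```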

42 2011 Data Mining, IISE, SNUT
Step 7: Deploy and monitor the model
- Deployment: integrate the data mining model into the operational system and run it on real data to produce decisions or actions. "Send Mr. Kang a coupon because his likelihood of leaving the company next month is 80%."
- Monitoring: evaluate the performance of the model after deployment; update or redevelop the model if necessary.

