UNIT V: DATA WAREHOUSING AND DATA MINING
Steps of Data Mining

There are various steps involved in mining data:

- Data Integration: First, the data are collected and integrated from all the different sources.
- Data Selection: We may not need all the data we collected in the first step, so in this step we select only the data we consider useful for mining.
- Data Cleaning: The collected data are not clean and may contain errors, missing values, or noisy and inconsistent records, so we apply different techniques to remove such anomalies.
- Data Transformation: Even after cleaning, the data are not ready for mining; we need to transform them into forms appropriate for mining, using techniques such as smoothing, aggregation, and normalization (see the sketch after this list).
- Data Mining: Now we are ready to apply data mining techniques to the data to discover interesting patterns. Clustering and association analysis are among the many techniques used.
- Pattern Evaluation and Knowledge Presentation: This step involves visualization, transformation, and the removal of redundant patterns from the patterns we generated.
- Decisions / Use of Discovered Knowledge: This step helps the user make use of the acquired knowledge to take better decisions.
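As a concrete illustration of the cleaning and transformation steps, here is a minimal pandas sketch; the file name and the "income" column are hypothetical, and min-max normalization stands in for the normalization step:

    import pandas as pd

    # Data Integration / Selection assumed done: load the selected data
    df = pd.read_csv("customers.csv")  # hypothetical file

    # Data Cleaning: drop duplicate rows, fill missing values with the median
    df = df.drop_duplicates()
    df["income"] = df["income"].fillna(df["income"].median())

    # Data Transformation: min-max normalization rescales a column into [0, 1]
    df["income_norm"] = (df["income"] - df["income"].min()) / (
        df["income"].max() - df["income"].min()
    )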
Data Warehouse

Data warehouses (DWs) are central repositories of integrated data from one or more disparate sources. They store current and historical data and are used for creating analytical reports for knowledge workers throughout the enterprise. Examples of reports range from annual and quarterly comparisons and trends to detailed daily sales analysis.
Data Mining for Customer Modeling

Customer tasks:
- attrition prediction
- targeted marketing: cross-sell, customer acquisition
- credit risk
- fraud detection

Industries: banking, telecom, retail sales, …
Customer Attrition: Case Study

Situation: The attrition rate for mobile phone customers is around 25-30% a year!
Task: Given customer information for the past N months, predict who is likely to attrite next month. Also estimate customer value and the most cost-effective offer to make to this customer.
Customer Attrition Results

- Verizon Wireless built a customer data warehouse
- Identified potential attriters
- Developed multiple, regional models
- Targeted customers with a high propensity to accept the offer
- Reduced the attrition rate from over 2%/month to under 1.5%/month (a huge impact, with >30 M subscribers)
Assessing Credit Risk: Case Study

Situation: A person applies for a loan.
Task: Should the bank approve the loan?
Note: People with the best credit don't need loans, and people with the worst credit are not likely to repay. The bank's best customers are in the middle.
Credit Risk: Results

Banks develop credit models using a variety of machine learning methods. The proliferation of mortgages and credit cards is the result of being able to successfully predict whether a person is likely to default on a loan. Such models are widely deployed in many countries.
Successful e-commerce: Case Study

A person buys a book (product) at Amazon.com.
Task: Recommend other books (products) this person is likely to buy.
Amazon clusters customers based on the books they bought: customers who bought "Advances in Knowledge Discovery and Data Mining" also bought "Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations". The recommendation program is quite successful.
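A toy sketch of the "customers who bought X also bought Y" idea using co-purchase counts; the order data and titles are invented for illustration, and this is not Amazon's actual algorithm:

    from collections import Counter
    from itertools import combinations

    # each order is the set of titles bought together (invented data)
    orders = [
        {"Advances in Knowledge Discovery", "Data Mining: Practical ML Tools"},
        {"Advances in Knowledge Discovery", "Data Mining: Practical ML Tools"},
        {"Advances in Knowledge Discovery", "Pattern Classification"},
    ]

    # count how often every ordered pair of titles is bought together
    co_counts = Counter()
    for order in orders:
        for a, b in combinations(sorted(order), 2):
            co_counts[(a, b)] += 1
            co_counts[(b, a)] += 1

    def recommend(title, k=2):
        """Return the k titles most often co-purchased with `title`."""
        scores = {b: c for (a, b), c in co_counts.items() if a == title}
        return sorted(scores, key=scores.get, reverse=True)[:k]

    print(recommend("Advances in Knowledge Discovery"))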
Major Data Mining Tasks

- Classification: predicting an item's class
- Clustering: finding clusters in data
- Associations: e.g. A & B & C occur frequently together
Classification

Learn a method for predicting the instance class from pre-labeled (classified) instances. This is also called supervised learning. There are many approaches: regression, decision trees, Bayesian methods, neural networks, ...
Given a set of points from known classes, what is the class of a new point?
Supervised vs. Unsupervised Learning

Supervised learning (classification):
- Supervision: the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations.
- New data are classified based on the training set.

Unsupervised learning (clustering):
- The class labels of the training data are unknown.
- Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data.
Classification: Decision Trees

    if X > 5 then blue
    else if Y > 3 then blue
    else if X > 2 then green
    else blue

[Figure: points in the X-Y plane partitioned into blue and green regions by the thresholds X = 2, X = 5, and Y = 3]
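The rule list translates directly into a runnable Python function; a minimal sketch using the slide's own thresholds and colour labels:

    def classify(x: float, y: float) -> str:
        """Direct translation of the decision rules above."""
        if x > 5:
            return "blue"
        elif y > 3:
            return "blue"
        elif x > 2:
            return "green"
        else:
            return "blue"

    print(classify(4, 2))  # -> "green" (x <= 5, y <= 3, x > 2)
    print(classify(6, 0))  # -> "blue"  (x > 5)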
Example: The weather problem

    Outlook    Temperature  Humidity  Windy  Play
    sunny      85           85        false  no
    sunny      80           90        true   no
    overcast   83           86        false  yes
    rainy      70           96        false  yes
    rainy      68           80        false  yes
    rainy      65           70        true   no
    overcast   64           65        true   yes
    sunny      72           95        false  no
    sunny      69           70        false  yes
    rainy      75           80        false  yes
    sunny      75           70        true   yes
    overcast   72           90        true   yes
    overcast   81           75        false  yes
    rainy      71           91        true   no

Given the past data, can you come up with the rules for Play / Not Play? What is the game?
The weather problem: Conditions for playing

    Outlook    Temperature  Humidity  Windy  Play
    sunny      hot          high      false  no
    sunny      hot          high      true   no
    overcast   hot          high      false  yes
    rainy      mild         high      false  yes
    rainy      cool         normal    false  yes
    …          …            …         …      …

If outlook = sunny and humidity = high then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity = normal then play = yes
If none of the above then play = yes
Weather data with mixed attributes

Some attributes have numeric values:

    Outlook    Temperature  Humidity  Windy  Play
    sunny      85           85        false  no
    sunny      80           90        true   no
    overcast   83           86        false  yes
    rainy      75           80        false  yes
    …          …            …         …      …

If outlook = sunny and humidity > 83 then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity < 85 then play = yes
If none of the above then play = yes
A decision tree for this problem

    outlook
    ├─ sunny    → humidity
    │              ├─ high   → no
    │              └─ normal → yes
    ├─ overcast → yes
    └─ rainy    → windy
                   ├─ TRUE  → no
                   └─ FALSE → yes
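This tree can be reproduced with scikit-learn; a sketch trained on the nominal weather data from earlier (one-hot encoding and the entropy criterion are implementation choices, not part of the slide; temperature is omitted because the tree above never tests it):

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier, export_text

    # the 14 nominal weather instances (temperature column omitted)
    data = pd.DataFrame({
        "outlook":  ["sunny", "sunny", "overcast", "rainy", "rainy", "rainy",
                     "overcast", "sunny", "sunny", "rainy", "sunny", "overcast",
                     "overcast", "rainy"],
        "humidity": ["high", "high", "high", "high", "normal", "normal",
                     "normal", "high", "normal", "normal", "normal", "high",
                     "normal", "high"],
        "windy":    [False, True, False, False, False, True, True,
                     False, False, False, True, True, False, True],
        "play":     ["no", "no", "yes", "yes", "yes", "no", "yes",
                     "no", "yes", "yes", "yes", "yes", "yes", "no"],
    })

    X = pd.get_dummies(data.drop(columns="play"))  # one-hot encode categories
    clf = DecisionTreeClassifier(criterion="entropy").fit(X, data["play"])
    print(export_text(clf, feature_names=list(X.columns)))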
Attribute Selection: Splitting Criteria

The splitting criterion determines the best way to split a node. Common measures:
- Information Gain
- Gain Ratio
- Gini Index
Which attribute to select?

[Figure: the weather data split by each of the four attributes: outlook, temperature, humidity, windy]
A criterion for attribute selection

Which is the best attribute? The one that will result in the smallest tree. Heuristic: choose the attribute that produces the "purest" nodes. A popular impurity criterion is information gain: information gain increases with the average purity of the subsets that an attribute produces. Strategy: choose the attribute that results in the greatest information gain. (Witten & Eibe)
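A worked computation for the weather data's "outlook" attribute; the class counts (9 yes / 5 no overall; sunny 2 yes / 3 no, overcast 4 / 0, rainy 3 / 2) are read off the table shown earlier:

    from math import log2

    def entropy(counts):
        """entropy(S) = -sum p_i * log2(p_i) over the class proportions."""
        total = sum(counts)
        return -sum(c / total * log2(c / total) for c in counts if c > 0)

    root = entropy([9, 5])  # 9 yes, 5 no -> about 0.940 bits
    # outlook splits the 14 examples into sunny(2,3), overcast(4,0), rainy(3,2)
    children = [(5, entropy([2, 3])), (4, entropy([4, 0])), (5, entropy([3, 2]))]
    gain = root - sum(n / 14 * e for n, e in children)
    print(round(gain, 3))  # about 0.247, the highest gain of the four attributes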
Association Analysis: Discovery of Association Rules

Association rules show attribute-value conditions that occur frequently together in a set of data, e.g. a market basket. Given a set of data, find rules that predict the occurrence of an item based on the occurrences of other items in the data. A rule has the form body ⇒ head, for example:

buys(Omar, "milk") ⇒ buys(Omar, "sugar")
Association Analysis: Example

    Location  Business Types
    1         Barber, Bakery, Convenience Store, Meat Shop, Fast Food
    2         Bakery, Bookstore, Petrol Pump, Convenience Store, Library, Fast Food
    3         Carpenter, Electrician, Barber, Hardware Store
    4         Bakery, Vegetable Market, Flower Shop, Sweets Shop, Meat Shop
    5         Convenience Store, Hospital, Pharmacy, Sports Shop, Gym, Fast Food
    6         Internet Café, Gym, Games Shop, Sports Shop, Fast Food, Bakery

Association rule: X ⇒ Y, e.g. (Fast Food, Bakery) ⇒ (Convenience Store)

Support S: the fraction of locations that contain both X and Y, i.e. S = P(X ∪ Y).
S(Fast Food, Bakery, Convenience Store) = 2/6 ≈ 0.33

Confidence C: how often items in Y appear in locations that contain X, i.e. C = P(X ∪ Y) / P(X).
C[(Fast Food, Bakery) ⇒ (Convenience Store)] = 0.33 / 0.50 ≈ 0.66
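These numbers are easy to verify in Python; a minimal sketch over the same six locations:

    locations = [
        {"Barber", "Bakery", "Convenience Store", "Meat Shop", "Fast Food"},
        {"Bakery", "Bookstore", "Petrol Pump", "Convenience Store", "Library", "Fast Food"},
        {"Carpenter", "Electrician", "Barber", "Hardware Store"},
        {"Bakery", "Vegetable Market", "Flower Shop", "Sweets Shop", "Meat Shop"},
        {"Convenience Store", "Hospital", "Pharmacy", "Sports Shop", "Gym", "Fast Food"},
        {"Internet Café", "Gym", "Games Shop", "Sports Shop", "Fast Food", "Bakery"},
    ]

    def support(itemset):
        """Fraction of locations containing every item in `itemset`."""
        return sum(itemset <= loc for loc in locations) / len(locations)

    X, Y = {"Fast Food", "Bakery"}, {"Convenience Store"}
    print(round(support(X | Y), 2))               # 0.33
    print(round(support(X | Y) / support(X), 2))  # 0.67 (the slide rounds to .66)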
Association Analysis: Problem Definition

Given a set of transactions T, the goal of association rule mining is to find all rules having:
- support ≥ minsup threshold
- confidence ≥ minconf threshold

Brute-force approach: list all possible association rules, compute the support and confidence for each rule, and prune the rules that fail the minsup and minconf thresholds. ⇒ Computationally prohibitive!
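The blow-up can be quantified with a standard count: over d distinct items, each item can go into the rule body, the rule head, or neither (3^d assignments), and removing the assignments with an empty body or empty head leaves

R = 3^d - 2^(d+1) + 1

possible rules. Even d = 6 items already gives 3^6 - 2^7 + 1 = 602 candidate rules, each of which would need its support and confidence computed by brute force.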
Association Analysis: Rules from One Itemset

    Location  Business Types
    1         Barber, Bakery, Convenience Store, Meat Shop, Fast Food
    2         Bakery, Bookstore, Petrol Pump, Convenience Store, Library, Fast Food
    3         Carpenter, Electrician, Barber, Hardware Store
    4         Bakery, Vegetable Market, Flower Shop, Sweets Shop, Meat Shop
    5         Convenience Store, Hospital, Pharmacy, Sports Shop, Gym, Fast Food
    6         Internet Café, Gym, Games Shop, Sports Shop, Fast Food, Bakery

All rules over the itemset (Fast Food, Bakery, Convenience Store):

    (Fast Food, Bakery) ⇒ (Convenience Store)    S = 0.33  C = 0.66
    (Convenience Store, Bakery) ⇒ (Fast Food)    S = 0.33  C = 1.00
    (Fast Food, Convenience Store) ⇒ (Bakery)    S = 0.33  C = 0.66
    (Convenience Store) ⇒ (Fast Food, Bakery)    S = 0.33  C = 0.66
    (Fast Food) ⇒ (Convenience Store, Bakery)    S = 0.33  C = 0.50
    (Bakery) ⇒ (Fast Food, Convenience Store)    S = 0.33  C = 0.50
Association Analysis: Observations

- All six rules above are binary partitions of the same itemset (Fast Food, Bakery, Convenience Store).
- They have identical support but different confidence.
- Support and confidence thresholds may therefore be set independently.
Reducing the Number of Candidates

Scan 1: count the occurrences of each business type across the six locations (candidate 1-itemsets):

    Business Type       Count
    Barber              2
    Bakery              4
    Bookstore           1
    Carpenter           1
    Convenience Store   3
    Electrician         1
    Fast Food           4
    Flower Shop         1
    Games Shop          1
    Gym                 2
    Hardware Store      1
    Hospital            1
    Internet Café       1
    Library             1
    Meat Shop           2
    Petrol Pump         1
    Pharmacy            1
    Sports Shop         2
    Sweets Shop         1
    Vegetable Market    1

Filter out itemsets with fewer than m = 2 occurrences; the walk-through carries forward the four items of the running example:

    Business Type       Count
    Barber              2
    Bakery              4
    Convenience Store   3
    Fast Food           4

Form pairs of the surviving items: 4C2 = 6 candidate 2-itemsets, counted in a second scan (L2):

    Business Type Pair               Count
    (Barber, Bakery)                 1
    (Barber, Convenience Store)      1
    (Barber, Fast Food)              1
    (Bakery, Convenience Store)      2
    (Bakery, Fast Food)              3
    (Convenience Store, Fast Food)   3

Filter again with m = 2:

    Business Type Pair               Count
    (Bakery, Convenience Store)      2
    (Bakery, Fast Food)              3
    (Convenience Store, Fast Food)   3
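The same two-pass idea in a minimal Python sketch over the toy table; unlike the slide's walk-through, it keeps every item that meets the minimum count, so Gym, Meat Shop and Sports Shop also survive scan 1:

    from collections import Counter
    from itertools import combinations

    locations = [
        {"Barber", "Bakery", "Convenience Store", "Meat Shop", "Fast Food"},
        {"Bakery", "Bookstore", "Petrol Pump", "Convenience Store", "Library", "Fast Food"},
        {"Carpenter", "Electrician", "Barber", "Hardware Store"},
        {"Bakery", "Vegetable Market", "Flower Shop", "Sweets Shop", "Meat Shop"},
        {"Convenience Store", "Hospital", "Pharmacy", "Sports Shop", "Gym", "Fast Food"},
        {"Internet Café", "Gym", "Games Shop", "Sports Shop", "Fast Food", "Bakery"},
    ]
    MIN_COUNT = 2

    # scan 1: count single items, keep only those meeting the minimum count
    item_counts = Counter(item for loc in locations for item in loc)
    L1 = {item for item, c in item_counts.items() if c >= MIN_COUNT}

    # scan 2: count only pairs whose members both survived scan 1
    pair_counts = Counter()
    for loc in locations:
        for pair in combinations(sorted(L1 & loc), 2):
            pair_counts[pair] += 1

    L2 = {p: c for p, c in pair_counts.items() if c >= MIN_COUNT}
    print(L2)  # includes ('Bakery', 'Convenience Store'): 2, ('Bakery', 'Fast Food'): 3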
Classification vs Association Rules

Classification rules:
- focus on one target field
- specify a class in all cases
- measure: accuracy

Association rules:
- many target fields
- applicable in some cases
- measures: support, confidence, lift
Clustering

1. A cluster is a subset of objects which are "similar".
2. A cluster is a subset of objects such that the distance between any two objects in the cluster is less than the distance between any object in the cluster and any object not in it.
3. A cluster is a connected region of a multidimensional space containing a relatively high density of objects.

Clustering is unsupervised learning: it finds a "natural" grouping of instances in unlabeled data.
K-means Clustering Algorithm

1. Initialize the cluster centers.
2. Assign each data point to the closest cluster.
3. Set the position of each cluster center to the mean of all data points belonging to that cluster.
4. Repeat steps 2-3 until convergence.
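A minimal NumPy sketch of these four steps; the initialization (a random sample of the points) and the toy data are illustrative choices:

    import numpy as np

    def kmeans(X, k, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), size=k, replace=False)]  # step 1: init
        for _ in range(iters):
            # step 2: assign each point to its nearest center
            dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
            labels = dists.argmin(axis=1)
            # step 3: move each center to the mean of its assigned points
            new = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                            else centers[j] for j in range(k)])
            if np.allclose(new, centers):  # step 4: stop at convergence
                break
            centers = new
        return centers, labels

    X = np.array([[1.0, 1.0], [1.5, 2.0], [8.0, 8.0], [9.0, 9.0]])
    centers, labels = kmeans(X, k=2)
    print(labels)  # two natural groups, e.g. [0 0 1 1] or [1 1 0 0]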