DATA WAREHOUSING AND DATA MINING


UNIT V: DATA WAREHOUSING AND DATA MINING

Steps of Data Mining

Steps of Data Mining  There are several steps involved in mining data:

Data Integration: First of all, the data are collected and integrated from all the different sources.
Data Selection: We may not need all the data we collected in the first step, so in this step we select only those data we think are useful for data mining.
Data Cleaning: The data we have collected are not clean and may contain errors, missing values, or noisy and inconsistent records, so we need to apply different techniques to get rid of such anomalies.
Data Transformation: Even after cleaning, the data are not ready for mining; we need to transform them into forms appropriate for mining, using techniques such as smoothing, aggregation, and normalization.
Data Mining: Now we are ready to apply data mining techniques to the data to discover interesting patterns. Clustering and association analysis are among the many techniques used for data mining.
Pattern Evaluation and Knowledge Presentation: This step involves visualization, transformation, and removal of redundant patterns from the patterns we generated.
Decisions / Use of Discovered Knowledge: This step helps the user make use of the acquired knowledge to take better decisions.
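As a minimal sketch (not from the slides), the steps above might look like this in pandas and scikit-learn; the file names, column names, cluster count, and thresholds are all hypothetical:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.cluster import KMeans

# Data Integration: collect and combine data from different (hypothetical) sources.
sales = pd.read_csv("sales.csv")          # hypothetical file
customers = pd.read_csv("customers.csv")  # hypothetical file
data = sales.merge(customers, on="customer_id")

# Data Selection: keep only the columns believed to be useful for mining.
data = data[["customer_id", "age", "monthly_spend", "visits"]]

# Data Cleaning: drop duplicates and fill missing numeric values with the median.
data = data.drop_duplicates()
data = data.fillna(data.median(numeric_only=True))

# Data Transformation: normalize numeric attributes to the [0, 1] range.
features = ["age", "monthly_spend", "visits"]
data[features] = MinMaxScaler().fit_transform(data[features])

# Data Mining: discover patterns, here by clustering customers into 3 groups.
data["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data[features])

# Pattern Evaluation / Presentation: summarize each cluster for the decision maker.
print(data.groupby("cluster")[features].mean())
```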

Data warehouse DWs are central repositories of integrated data from one or more disparate sources. They store current and historical data and are used for creating analytical reports for knowledge workers throughout the enterprise. Examples of reports could range from annual and quarterly comparisons and trends to detailed daily sales analysis.

Data Mining for Customer Modeling
Customer tasks:
- attrition prediction
- targeted marketing: cross-sell, customer acquisition
- credit risk
- fraud detection
Industries: banking, telecom, retail sales, …

Customer Attrition: Case Study  Situation: The attrition rate for mobile phone customers is around 25-30% a year! Task: Given customer information for the past N months, predict who is likely to attrite next month. Also estimate the customer's value and the most cost-effective offer to be made to this customer.

Customer Attrition Results
- Verizon Wireless built a customer data warehouse
- Identified potential attriters
- Developed multiple, regional models
- Targeted customers with a high propensity to accept the offer
- Reduced the attrition rate from over 2%/month to under 1.5%/month (huge impact, with >30 M subscribers)

Assessing Credit Risk: Case Study  Situation: A person applies for a loan. Task: Should the bank approve the loan? Note: People who have the best credit don't need the loans, and people with the worst credit are not likely to repay. The bank's best customers are in the middle.

Credit Risk - Results  Banks develop credit models using a variety of machine learning methods. Mortgage and credit card proliferation are the result of being able to successfully predict if a person is likely to default on a loan. Such models are widely deployed in many countries.

Successful e-commerce – Case Study  A person buys a book (product) at Amazon.com. Task: Recommend other books (products) this person is likely to buy. Amazon does clustering based on books bought: customers who bought “Advances in Knowledge Discovery and Data Mining” also bought “Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations”. The recommendation program is quite successful.

Major Data Mining Tasks
- Classification: predicting an item's class
- Clustering: finding clusters in data
- Associations: e.g. A & B & C occur frequently together

Classification  Learn a method for predicting the instance class from pre-labeled (classified) instances. This is also called supervised learning. Many approaches: regression, decision trees, Bayesian methods, neural networks, ... Given a set of points from known classes, what is the class of a new point?
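As a small illustrative sketch (not part of the slides), the "class of a new point" question can be answered with a k-nearest-neighbour classifier from scikit-learn; the points and labels below are made up:

```python
from sklearn.neighbors import KNeighborsClassifier

# Illustrative 2-D points with known classes (made-up data).
points = [[1, 1], [1, 2], [2, 1],      # class "green"
          [6, 5], [7, 6], [6, 7]]      # class "blue"
labels = ["green", "green", "green", "blue", "blue", "blue"]

# Learn a classifier from the pre-labeled instances (supervised learning).
clf = KNeighborsClassifier(n_neighbors=3).fit(points, labels)

# Predict the class of a new, unseen point.
print(clf.predict([[2, 2]]))   # -> ['green']
```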

Supervised vs. Unsupervised Learning
Supervised learning (classification): Supervision means the training data (observations, measurements, etc.) are accompanied by labels indicating the class of the observations. New data is classified based on the training set.
Unsupervised learning (clustering): The class labels of the training data are unknown. Given a set of measurements, observations, etc., the aim is to establish the existence of classes or clusters in the data.

Classification: Decision Trees

if X > 5 then blue
else if Y > 3 then blue
else if X > 2 then green
else blue

[Figure: points in the X-Y plane partitioned by the splits at X = 2, X = 5, and Y = 3]
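The same rules written as a small Python function, a direct sketch of the pseudocode above (the thresholds and class names come from the slide):

```python
def classify(x, y):
    """Classify a 2-D point using the decision rules from the slide."""
    if x > 5:
        return "blue"
    elif y > 3:
        return "blue"
    elif x > 2:
        return "green"
    else:
        return "blue"

print(classify(3, 1))   # -> green
print(classify(6, 0))   # -> blue
```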

Example: The weather problem

Outlook   Temperature  Humidity  Windy  Play
sunny     85           85        false  no
sunny     80           90        true   no
overcast  83           86        false  yes
rainy     70           96        false  yes
rainy     68           80        false  yes
rainy     65           70        true   no
overcast  64           65        true   yes
sunny     72           95        false  no
sunny     69           70        false  yes
rainy     75           80        false  yes
sunny     75           70        true   yes
overcast  72           90        true   yes
overcast  81           75        false  yes
rainy     71           91        true   no

Given past data, can you come up with the rules for Play / Not Play? What is the game?

The weather problem  Conditions for playing:

Outlook   Temperature  Humidity  Windy  Play
Sunny     Hot          High      False  No
Sunny     Hot          High      True   No
Overcast  Hot          High      False  Yes
Rainy     Mild         Normal    False  Yes
…         …            …         …      …

If outlook = sunny and humidity = high then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity = normal then play = yes
If none of the above then play = yes

Weather data with mixed attributes  Some attributes have numeric values.

Outlook   Temperature  Humidity  Windy  Play
Sunny     85           85        False  No
Sunny     80           90        True   No
Overcast  83           86        False  Yes
Rainy     75           80        False  Yes
…         …            …         …      …

If outlook = sunny and humidity > 83 then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity < 85 then play = yes
If none of the above then play = yes

A decision tree for this problem:

outlook = sunny    ->  humidity = high    ->  no
                       humidity = normal  ->  yes
outlook = overcast ->  yes
outlook = rainy    ->  windy = TRUE   ->  no
                       windy = FALSE  ->  yes
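A minimal sketch of learning such a tree with scikit-learn from the 14 nominal weather instances shown earlier (temperature is omitted since the hand-built tree does not use it; because scikit-learn builds binary trees over one-hot attributes, the learned tree may look slightly different from the one above):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# The 14 nominal weather instances (outlook, humidity, windy -> play).
data = pd.DataFrame({
    "outlook":  ["sunny", "sunny", "overcast", "rainy", "rainy", "rainy", "overcast",
                 "sunny", "sunny", "rainy", "sunny", "overcast", "overcast", "rainy"],
    "humidity": ["high", "high", "high", "high", "normal", "normal", "normal",
                 "high", "normal", "normal", "normal", "high", "normal", "high"],
    "windy":    [False, True, False, False, False, True, True,
                 False, False, False, True, True, False, True],
    "play":     ["no", "no", "yes", "yes", "yes", "no", "yes",
                 "no", "yes", "yes", "yes", "yes", "yes", "no"],
})

# One-hot encode the nominal attributes so the tree learner can use them.
X = pd.get_dummies(data[["outlook", "humidity", "windy"]])
y = data["play"]

# Grow a decision tree using the entropy (information gain) criterion.
tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```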

Attribute Selection  The splitting criterion determines the best way to split a node. Common measures: Information Gain, Gain Ratio, Gini Index.

Which attribute to select?

A criterion for attribute selection  Which is the best attribute? The one that will result in the smallest tree. Heuristic: choose the attribute that produces the "purest" nodes. A popular impurity criterion is information gain: information gain increases with the average purity of the subsets that an attribute produces. Strategy: choose the attribute that results in the greatest information gain. (Witten & Eibe)
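A worked sketch of information gain on the weather data: the entropy of the full set (9 yes / 5 no) is about 0.940 bits, and splitting on outlook gives the largest gain, about 0.247 bits. In Python:

```python
from math import log2

def entropy(counts):
    """Entropy (in bits) of a class distribution given as a list of counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# Whole data set: 9 "yes" and 5 "no" instances.
base = entropy([9, 5])                       # ~0.940 bits

# Splitting on outlook: sunny -> [2 yes, 3 no], overcast -> [4, 0], rainy -> [3, 2].
subsets = [[2, 3], [4, 0], [3, 2]]
remainder = sum(sum(s) / 14 * entropy(s) for s in subsets)

print(round(base - remainder, 3))            # information gain ~0.247
```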

Association Analysis  Discovery of association rules showing attribute-value conditions that occur frequently together in a set of data, e.g. market basket data. Given a set of data, find rules that will predict the occurrence of an item based on the occurrences of other items in the data. A rule has the form body ⇒ head, e.g. buys(Omar, “milk”) ⇒ buys(Omar, “sugar”).

Association Analysis

Association Analysis

Location  Business Types
1         Barber, Bakery, Convenience Store, Meat Shop, Fast Food
2         Bakery, Bookstore, Petrol Pump, Convenience Store, Library, Fast Food
3         Carpenter, Electrician, Barber, Hardware Store
4         Bakery, Vegetable Market, Flower Shop, Sweets Shop, Meat Shop
5         Convenience Store, Hospital, Pharmacy, Sports Shop, Gym, Fast Food
6         Internet Café, Gym, Games Shop, Shorts Shop, Fast Food, Bakery

Association rule: X ⇒ Y, e.g. (Fast Food, Bakery) ⇒ (Convenience Store)

Support S: fraction of locations that contain both X and Y = P(X ∪ Y)
S(Fast Food, Bakery, Convenience Store) = 2/6 ≈ 0.33

Confidence C: how often items in Y appear in locations that contain X = P(X ∪ Y) / P(X)
C[(Fast Food, Bakery) ⇒ (Convenience Store)] = 0.33 / 0.50 ≈ 0.66
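A minimal pure-Python sketch that reproduces these two numbers from the location table above:

```python
# Business types present at each location (from the table above).
locations = [
    {"Barber", "Bakery", "Convenience Store", "Meat Shop", "Fast Food"},
    {"Bakery", "Bookstore", "Petrol Pump", "Convenience Store", "Library", "Fast Food"},
    {"Carpenter", "Electrician", "Barber", "Hardware Store"},
    {"Bakery", "Vegetable Market", "Flower Shop", "Sweets Shop", "Meat Shop"},
    {"Convenience Store", "Hospital", "Pharmacy", "Sports Shop", "Gym", "Fast Food"},
    {"Internet Café", "Gym", "Games Shop", "Shorts Shop", "Fast Food", "Bakery"},
]

def support(itemset):
    """Fraction of locations that contain every item in the itemset."""
    return sum(itemset <= loc for loc in locations) / len(locations)

X = {"Fast Food", "Bakery"}
Y = {"Convenience Store"}

s = support(X | Y)                 # support of the rule X => Y
c = support(X | Y) / support(X)    # confidence of the rule X => Y
print(round(s, 2), round(c, 2))    # -> 0.33 0.67
```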

Association Analysis  Given a set of transactions T, the goal of association rule mining is to find all rules having support ≥ minsup threshold and confidence ≥ minconf threshold. Brute-force approach: list all possible association rules, compute the support and confidence for each rule, and prune the rules that fail the minsup or minconf thresholds ⇒ computationally prohibitive!
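A sketch of this brute-force approach on the location data above, restricted to itemsets of size 2 and 3 just to keep the output small; the minsup and minconf values are illustrative. Even on six tiny transactions the candidate space grows combinatorially, which is why the approach is prohibitive in general:

```python
from itertools import combinations

# Location data from the table above.
locations = [
    {"Barber", "Bakery", "Convenience Store", "Meat Shop", "Fast Food"},
    {"Bakery", "Bookstore", "Petrol Pump", "Convenience Store", "Library", "Fast Food"},
    {"Carpenter", "Electrician", "Barber", "Hardware Store"},
    {"Bakery", "Vegetable Market", "Flower Shop", "Sweets Shop", "Meat Shop"},
    {"Convenience Store", "Hospital", "Pharmacy", "Sports Shop", "Gym", "Fast Food"},
    {"Internet Café", "Gym", "Games Shop", "Shorts Shop", "Fast Food", "Bakery"},
]

def support(itemset):
    return sum(itemset <= loc for loc in locations) / len(locations)

minsup, minconf = 0.3, 0.6          # illustrative thresholds
items = sorted(set().union(*locations))

for size in range(2, 4):            # candidate itemsets of size 2 and 3 only
    for candidate in combinations(items, size):
        candidate = frozenset(candidate)
        s = support(candidate)
        if s < minsup:
            continue
        for k in range(1, size):    # every split of the itemset into body => head
            for body in combinations(candidate, k):
                body = frozenset(body)
                c = s / support(body)
                if c >= minconf:
                    print(set(body), "=>", set(candidate - body),
                          f"support={s:.2f} confidence={c:.2f}")
```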

Association Analysis

Location  Business Types
1         Barber, Bakery, Convenience Store, Meat Shop, Fast Food
2         Bakery, Bookstore, Petrol Pump, Convenience Store, Library, Fast Food
3         Carpenter, Electrician, Barber, Hardware Store, Meat Shop
4         Bakery, Vegetable Market, Flower Shop, Sweets Shop, Meat Shop
5         Convenience Store, Hospital, Pharmacy, Sports Shop, Gym, Fast Food
6         Internet Café, Gym, Sweets Shop, Shorts Shop, Fast Food, Bakery

Association rules:
(Fast Food, Bakery) ⇒ (Convenience Store)       Support S: 0.33   Confidence C: 0.66
(Convenience Store, Bakery) ⇒ (Fast Food)       Support S: 0.33   Confidence C: 0.50
(Fast Food, Convenience Store) ⇒ (Bakery)       Support S: 0.33   Confidence C: 0.50
(Convenience Store) ⇒ (Fast Food, Bakery)       Support S: 0.33   Confidence C: 0.66
(Fast Food) ⇒ (Convenience Store, Bakery)       Support S: 0.33   Confidence C: 1
(Bakery) ⇒ (Fast Food, Convenience Store)       Support S: 0.33   Confidence C: 0.66

Association Analysis Observations

Association rules:
(Fast Food, Bakery) ⇒ (Convenience Store)       Support S: 0.33   Confidence C: 0.66
(Convenience Store, Bakery) ⇒ (Fast Food)       Support S: 0.33   Confidence C: 0.50
(Fast Food, Convenience Store) ⇒ (Bakery)       Support S: 0.33   Confidence C: 0.50
(Convenience Store) ⇒ (Fast Food, Bakery)       Support S: 0.33   Confidence C: 0.66
(Fast Food) ⇒ (Convenience Store, Bakery)       Support S: 0.33   Confidence C: 1
(Bakery) ⇒ (Fast Food, Convenience Store)       Support S: 0.33   Confidence C: 0.66

Observations:
- The rules above are binary partitions of the same itemset.
- They have identical support but can have different confidence.
- The support and confidence thresholds may be set differently.

Reducing the number of candidates

Scan 1: count how many locations each business type appears in (Barber 2, Bookstore 1, Convenience Store 3, and so on for Bakery, Carpenter, Electrician, Fast Food, Flower Shop, Gym, Games Shop, Hardware Store, Hospital, Internet Café, Library, Meat Shop, Petrol Pump, Pharmacy, Sports Shop, Sweets Shop, Vegetable Market).

Filter out business types occurring fewer than 2 times. Remaining frequent items (L1): Barber, Bakery, Convenience Store, Fast Food.

Generate candidate pairs from the remaining items: 4C2 = 6 pairs.

Scan 2: count the candidate pairs (L2).
(Barber, Bakery)                1
(Barber, Convenience Store)     1
(Barber, Fast Food)             1
(Bakery, Convenience Store)     2
(Bakery, Fast Food)             3
(Convenience Store, Fast Food)  3

Filter out pairs occurring fewer than 2 times. Remaining frequent pairs:
(Bakery, Convenience Store)     2
(Bakery, Fast Food)             3
(Convenience Store, Fast Food)  3
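A sketch of this level-wise (Apriori-style) reduction in Python on the full location table; note that on the full table a couple of additional items (e.g. Gym, Meat Shop) also reach the minimum count of 2, so slightly more pairs survive than in the simplified walk-through above:

```python
from itertools import combinations
from collections import Counter

# Location data from the table above.
locations = [
    {"Barber", "Bakery", "Convenience Store", "Meat Shop", "Fast Food"},
    {"Bakery", "Bookstore", "Petrol Pump", "Convenience Store", "Library", "Fast Food"},
    {"Carpenter", "Electrician", "Barber", "Hardware Store"},
    {"Bakery", "Vegetable Market", "Flower Shop", "Sweets Shop", "Meat Shop"},
    {"Convenience Store", "Hospital", "Pharmacy", "Sports Shop", "Gym", "Fast Food"},
    {"Internet Café", "Gym", "Games Shop", "Shorts Shop", "Fast Food", "Bakery"},
]
min_count = 2

# Scan 1: count single items and keep only the frequent ones (L1).
item_counts = Counter(item for loc in locations for item in loc)
L1 = {item for item, count in item_counts.items() if count >= min_count}

# Candidate generation: only pairs whose members are both frequent (Apriori principle).
candidates = [frozenset(pair) for pair in combinations(sorted(L1), 2)]

# Scan 2: count the candidate pairs and keep the frequent ones (L2).
pair_counts = Counter(c for c in candidates for loc in locations if c <= loc)
L2 = {pair: count for pair, count in pair_counts.items() if count >= min_count}

for pair, count in sorted(L2.items(), key=lambda kv: -kv[1]):
    print(set(pair), count)
```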

Classification vs Association Rules

Classification rules:
- focus on one target field
- specify the class in all cases
- measures: accuracy

Association rules:
- many target fields
- applicable in some cases
- measures: support, confidence, lift

CLUSTERING
1. A cluster is a subset of objects which are "similar".
2. A subset of objects such that the distance between any two objects in the cluster is less than the distance between any object in the cluster and any object not located inside it.
3. A connected region of a multidimensional space containing a relatively high density of objects.
Unsupervised learning: finds a "natural" grouping of instances given unlabeled data.

K-means clustering algorithm
1. Initialize the cluster centers.
2. Assign each data point to the closest cluster center.
3. Set the position of each cluster center to the mean of all data points belonging to that cluster.
4. Repeat steps 2-3 until convergence.
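A compact NumPy sketch of these four steps; k, the made-up 2-D data, and the convergence test are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Made-up 2-D data: two loose blobs.
points = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
k = 2

# 1. Initialize the cluster centers by picking k distinct data points at random.
centers = points[rng.choice(len(points), k, replace=False)]

for _ in range(100):
    # 2. Assign each data point to the closest cluster center.
    distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    labels = distances.argmin(axis=1)

    # 3. Move each center to the mean of the points assigned to it.
    new_centers = np.array([points[labels == j].mean(axis=0) for j in range(k)])

    # 4. Stop when the centers no longer move (convergence).
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

print(centers)
```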