Published by Allen Eaton, modified over 9 years ago
1
Knowledge Discovery & Data Mining
–KDD: the process of extracting previously unknown, valid, and actionable (understandable) information from large databases
–Data mining is a step in the KDD process: applying data analysis and discovery algorithms
–Draws on machine learning, pattern recognition, statistics, databases, and data visualization
–Traditional techniques may be inadequate for large data
2
Why Mine Data? Huge amounts of data are being collected and warehoused
–Walmart records 20 million transactions per day
–Web logs: millions of hits per day on major sites
–Health care transactions: multi-gigabyte databases
–Mobil Oil: over 100 terabytes of geological data
Affordable computing
Competitive pressure
–gain an edge by providing improved, customized services
–information as a product in its own right
3
Knowledge discovery in databases (KDD) is the non-trivial process of identifying valid, potentially useful and ultimately understandable patterns in data.
KDD pipeline: Operational Databases --> Clean, Collect, Summarize --> Data Warehouse --> Data Preparation --> Training Data --> Mining --> Model / Patterns --> Verification, Evaluation
4
Data mining
Pattern: in the sequence 12121?, the pattern '12' is found often enough that, with some confidence, we can say '?' is 2 – "if '1' then '2' follows".
Pattern vs. model: in 121212?, a longer history such as 1212123 1212123 1212123 … suggests the repeating unit is 1212123, so '?' may be 3.
Models are created from historical data by detecting patterns; a model is a calculated guess about the likelihood that a pattern repeats.
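The "if '1' then '2' follows" idea above can be sketched as a frequency count over historical data. This is a minimal illustration; the function name and interface are invented for the sketch.

```python
from collections import Counter

def next_symbol_confidence(history: str, context: str) -> dict:
    """For every occurrence of `context` in `history`, count which symbol
    follows it, and return the relative frequency of each follower (the
    'confidence' that it comes next)."""
    followers = Counter()
    for i in range(len(history) - len(context)):
        if history[i:i + len(context)] == context:
            followers[history[i + len(context)]] += 1
    total = sum(followers.values())
    return {symbol: count / total for symbol, count in followers.items()}

# In '12121?', every '1' seen so far is followed by '2':
print(next_symbol_confidence("12121", "1"))  # {'2': 1.0}
```

With a longer history containing the unit 1212123, the same count would assign '2' a high confidence and '3' a lower one after context '2', matching the slide's intuition that a model is a calculated guess, not a certainty.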
5
Note on models and patterns: a pattern can be thought of as an instantiation of a model. E.g., f(x) = 3x² + x is a pattern, whereas f(x) = ax² + bx is a model. Data mining involves fitting models to, and determining patterns from, observed data.
6
Data Mining
Prediction methods – use some variables to predict unknown or future values of other variables; the model is built over database fields (predictors), and given a record's field values it makes a prediction
Descriptive methods – find human-interpretable patterns describing the data
7
Data Mining Techniques
–Classification
–Clustering
–Association Rule Discovery
–Sequential Pattern Discovery
–Regression
–Deviation Detection
8
Classification
Data are defined in terms of attributes, one of which is the class. Find a model for the class attribute as a function of the values of the other (predictor) attributes, such that previously unseen records can be assigned a class as accurately as possible.
Given data is usually divided into training and test sets.
–Training data: used to build the model
–Test data: used to validate the model (determine its accuracy)
9
Classification
Given old data about customers and payments, predict a new applicant's loan eligibility.
[Figure: previous customers' records (age, salary, profession, location, customer type) feed a classifier, which learns decision rules such as "salary > 5 L and prof. = exec"; the rules label a new applicant's data as good/bad.]
10
Classification methods
Goal: predict class Ci = f(x1, x2, …, xn)
–Regression (linear or any other polynomial)
–Decision tree classifier: divides the decision space into piecewise constant regions
–Neural networks: partition by non-linear boundaries
–Probabilistic/generative models
–Lazy learning methods: nearest neighbor
11
Decision trees
A tree where internal nodes are simple decision rules on one or more attributes and leaf nodes are predicted class labels.
[Figure: example tree splitting on "salary < 50 K", "prof = teacher", and "age < 30", with leaves labeled good/bad.]
12
Classification: Example
13
Decision Tree Training Dataset
14
Output: A Decision Tree for "buys_computer"
age <=30 --> student? (no --> no; yes --> yes)
age 30..40 --> yes
age >40 --> credit rating? (excellent --> no; fair --> yes)
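A learned tree like this is just a nest of decision rules, so it can be encoded directly in code. A minimal sketch; the function name and signature are hypothetical, and the branches follow the slide's tree:

```python
def buys_computer(age: int, student: bool, credit_rating: str) -> str:
    """Hypothetical encoding of the slide's buys_computer decision tree."""
    if age <= 30:
        return "yes" if student else "no"   # the student? subtree
    elif age <= 40:                         # the 30..40 branch
        return "yes"
    else:                                   # age > 40: credit rating? subtree
        return "yes" if credit_rating == "fair" else "no"

print(buys_computer(25, True, "fair"))        # yes
print(buys_computer(45, False, "excellent"))  # no
```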
15
Algorithm for Decision Tree Induction Basic algorithm (a greedy algorithm) –Tree is constructed in a top-down recursive divide-and-conquer manner –At start, all the training examples are at the root –Attributes are categorical (if continuous-valued, they are discretized in advance) –Examples are partitioned recursively based on selected attributes –Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain) Conditions for stopping partitioning –All samples for a given node belong to the same class –There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf –There are no samples left
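The greedy top-down induction above can be sketched in a few dozen lines, using information gain as the attribute-selection measure. The helper names and the toy dataset are invented for illustration; real implementations add pruning and continuous-attribute handling.

```python
import math
from collections import Counter

def entropy(labels):
    """Information needed to classify a sample drawn from `labels`."""
    total = len(labels)
    return -sum(n / total * math.log2(n / total)
                for n in Counter(labels).values())

def info_gain(rows, labels, attr):
    """Reduction in entropy obtained by partitioning on `attr`."""
    remainder = 0.0
    for value in {row[attr] for row in rows}:
        subset = [lab for row, lab in zip(rows, labels) if row[attr] == value]
        remainder += len(subset) / len(labels) * entropy(subset)
    return entropy(labels) - remainder

def build_tree(rows, labels, attrs):
    if len(set(labels)) == 1:          # stop: all samples in one class
        return labels[0]
    if not attrs:                      # stop: no attributes left -> majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: info_gain(rows, labels, a))
    tree = {best: {}}
    for value in {row[best] for row in rows}:   # partition recursively
        keep = [i for i, row in enumerate(rows) if row[best] == value]
        tree[best][value] = build_tree([rows[i] for i in keep],
                                       [labels[i] for i in keep],
                                       [a for a in attrs if a != best])
    return tree

# Toy categorical dataset (invented for illustration):
rows = [{"age": "<=30", "student": "no"},
        {"age": "<=30", "student": "yes"},
        {"age": "31..40", "student": "no"},
        {"age": ">40", "student": "no"}]
labels = ["no", "yes", "yes", "yes"]
tree = build_tree(rows, labels, ["age", "student"])
print(tree)
```

On this toy data, age has the higher gain, so it becomes the root, and the <=30 branch splits again on student, mirroring the buys_computer tree above.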
16
Attribute Selection Measure: Information Gain (ID3/C4.5)
Select the attribute with the highest information gain.
Let S contain s_i tuples of class C_i, for i = 1, …, m, with s tuples in total.
Information required to classify an arbitrary tuple:
I(s_1, …, s_m) = − Σ_i (s_i / s) log2(s_i / s)
Entropy of attribute A with values {a_1, a_2, …, a_v}:
E(A) = Σ_j ((s_1j + … + s_mj) / s) · I(s_1j, …, s_mj)
Information gained by branching on attribute A:
Gain(A) = I(s_1, …, s_m) − E(A)
17
Attribute Selection by Information Gain Computation
Class P: buys_computer = "yes" (9 tuples); Class N: buys_computer = "no" (5 tuples)
I(p, n) = I(9, 5) = 0.940
Compute the entropy for age: "age <=30" has 5 out of 14 samples, with 2 yes's and 3 no's, so it contributes (5/14) · I(2, 3); summing over the three age ranges gives E(age) = 0.694.
Hence Gain(age) = I(9, 5) − E(age) = 0.940 − 0.694 = 0.246. The gains for the other attributes are computed similarly.
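These quantities can be checked numerically. The slide only states the age <=30 split (2 yes, 3 no), so the (4, 0) and (3, 2) counts for the other two age ranges are an assumption taken from the standard buys_computer example:

```python
import math

def info(*counts):
    """I(s1, ..., sm): expected information for a class distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

# 9 'yes' and 5 'no' tuples overall:
i_total = info(9, 5)                      # ~0.940, as on the slide

# (yes, no) counts per age range; only (2, 3) is given on the slide,
# the other two follow the standard textbook dataset:
e_age = (5/14) * info(2, 3) + (4/14) * info(4, 0) + (5/14) * info(3, 2)
gain_age = i_total - e_age

print(round(i_total, 3), round(e_age, 3), round(gain_age, 3))
```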
18
Classification: Direct Marketing
Goal: reduce the cost of soliciting (mailing) by targeting the set of consumers most likely to buy a new product.
Data
–from a similar product introduced earlier
–we know which customers decided to buy and which did not: the {buy, not buy} class attribute
–collect various demographic, lifestyle, and company-related information about all such customers as possible predictor variables
Learn a classifier model
19
Classification: Fraud Detection
Goal: predict fraudulent cases in credit card transactions.
Data
–use credit card transactions and information about the account holder as input variables
–label past transactions as fraud or fair
Learn a model for the class of transactions
Use the model to detect fraud by observing credit card transactions on a given account.
20
Clustering Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that –data points in one cluster are more similar to one another –data points in separate clusters are less similar to one another. Similarity measures –Euclidean distance, if attributes are continuous –Problem specific measures
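With Euclidean distance as the similarity measure, the simplest concrete method satisfying the definition above is k-means. The slides do not name an algorithm, so this stdlib sketch is an illustration, not the slides' method:

```python
import math
import random

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means: assign each point to its nearest centroid, then
    move each centroid to the mean of its cluster, and repeat."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: euclidean(p, centroids[i]))
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:   # keep the old centroid if a cluster goes empty
                centroids[i] = tuple(sum(c) / len(members)
                                     for c in zip(*members))
    return centroids, clusters

# Two visually obvious groups of 2-D points:
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centroids, clusters = kmeans(points, 2)
print(clusters)
```

The result matches the definition: points within a cluster are close to each other (distance about 1), while points in different clusters are far apart.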
21
Clustering: Market Segmentation
Goal: subdivide a market into distinct subsets of customers, where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix.
Approach:
–collect geographic and lifestyle-related attributes of customers
–identify clusters of similar customers
–measure the clustering quality by observing buying patterns of customers in the same cluster vs. those from different clusters
22
Association Rule Discovery
Given a set of records, each of which contains some number of items from a given collection
–produce dependency rules that predict the occurrence of an item based on occurrences of other items
23
Association Rule: Basic Concepts
Given: (1) a database of transactions, (2) each transaction is a list of items (purchased by a customer in a visit)
Find: all rules that correlate the presence of one set of items with that of another set of items
–E.g., 98% of people who purchase tires and auto accessories also get automotive services done
Applications
–* --> Maintenance Agreement (what should the store do to boost Maintenance Agreement sales?)
–Home Electronics --> * (what other products should the store stock up on?)
–Attached mailing in direct marketing
24
Association Rule: Basic Concepts
Support (A --> B) = (number of tuples containing both A and B) / (total number of tuples)
Confidence (A --> B) = (number of tuples containing both A and B) / (number of tuples containing A)
25
Rule Measures: Support and Confidence
Find all rules X & Y --> Z with minimum confidence and support
–support, s: probability that a transaction contains {X, Y, Z}
–confidence, c: conditional probability that a transaction containing {X, Y} also contains Z
With minimum support 50% and minimum confidence 50%, we have
–A --> C (50%, 66.6%)
–C --> A (50%, 100%)
[Figure: Venn diagram of customers buying item A, item B, and both.]
26
Mining Association Rules – An Example
For the rule A --> C:
–support = support({A, C}) = 50%
–confidence = support({A, C}) / support({A}) = 66.6%
(min. support 50%, min. confidence 50%)
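The slide's numbers are consistent with the classic four-transaction toy database {A,B,C}, {A,C}, {A,D}, {B,E,F}. The original transaction table did not survive extraction, so that database is an assumption here:

```python
def support(db, itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(set(itemset) <= t for t in db) / len(db)

def confidence(db, antecedent, consequent):
    """Support of the combined itemset divided by support of the antecedent."""
    return support(db, set(antecedent) | set(consequent)) / support(db, antecedent)

# Assumed toy transaction database matching the slide's percentages:
db = [{"A", "B", "C"}, {"A", "C"}, {"A", "D"}, {"B", "E", "F"}]

print(support(db, {"A", "C"}))        # 0.5       -> 50%
print(confidence(db, {"A"}, {"C"}))   # 0.666...  -> 66.6%
print(confidence(db, {"C"}, {"A"}))   # 1.0       -> 100%
```

A appears in 3 of 4 transactions but {A, C} in only 2, which is why A --> C has lower confidence than C --> A even though both rules have the same support.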
27
Association Rules: Application
Marketing and sales promotion. Consider the discovered rule {Bagels, … } --> {Potato Chips}:
–Potato Chips as consequent: can be used to determine what may be done to boost its sales
–Bagels as antecedent: can be used to see which products may be affected if bagels are discontinued
–Both: can be used to see which products should be sold with Bagels to promote the sale of Potato Chips
28
Association Rules: Application Supermarket shelf management Goal: to identify items which are bought together (by sufficiently many customers) Approach: process point-of-sale data (collected with barcode scanners) to find dependencies among items. Example –If a customer buys Diapers and Milk, then he is very likely to buy Beer –so stack six-packs next to diapers?
29
Sequential Pattern Discovery
Given: a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events, of the form (A B) (C) (D E) --> (F)
Timing constraints:
–xg: max allowed time between consecutive event-sets
–ng: min required time between consecutive event-sets
–ws: window size – max time difference between the earliest and latest events in an event-set (events within an event-set may occur in any order)
–ms: max allowed time between the earliest and latest events of the sequence
30
Sequential Pattern Discovery: Examples
Sequences in which customers purchase goods/services
Understanding long-term customer behavior -- timely promotions
In point-of-sale transaction sequences
–Computer bookstore: (Intro to Visual C++) (C++ Primer) --> (Perl for Dummies, Tcl/Tk)
–Athletic apparel store: (Shoes) (Racket, Racquetball) --> (Sports Jacket)
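Ignoring the timing parameters (xg, ng, ws, ms), the core test behind such rules — does an ordered pattern of event-sets occur in a customer's timeline — can be sketched as a simple subsequence check. The function name and data layout are invented for the sketch:

```python
def contains_sequence(timeline, pattern):
    """Return True if `pattern` (a list of event-sets) occurs in order
    within `timeline` (a time-ordered list of event-sets). Each pattern
    event-set must be contained in some later timeline event-set; the
    timing constraints xg, ng, ws, ms are ignored in this sketch."""
    i = 0
    for events in timeline:
        if i < len(pattern) and set(pattern[i]) <= set(events):
            i += 1
    return i == len(pattern)

# One customer's purchase history, matching (Shoes) (Racket, Racquetball):
history = [{"Shoes"}, {"Racket", "Racquetball", "Towel"}, {"Sports Jacket"}]
print(contains_sequence(history, [{"Shoes"}, {"Racket", "Racquetball"}]))  # True
```

Counting how many customers' timelines contain both the antecedent sequence and the full sequence gives support and confidence for a sequential rule, analogous to the association-rule measures above.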