Data Mining Chapter 2 Association Rule Mining

Data Mining Chapter 2: Association Rule Mining
G. K. Gupta, Faculty of Information Technology, Monash University
November 2008

Introduction As noted earlier, huge amounts of data are stored electronically in many retail outlets due to the barcoding of goods sold. It is natural to try to find some useful information in these mountains of data. A conceptually simple yet interesting technique is to find association rules from these large databases. The problem was introduced by Rakesh Agrawal and his colleagues at IBM. Basket analysis is useful in determining what products customers are likely to purchase together. The analysis can also be useful in determining what purchases are made together over a period of time. An example might be that a person who has bought a lounge suite is likely to buy a TV next. This type of analysis is useful in marketing (e.g. couponing and advertising), store layout, etc.

Introduction Association rules mining (or market basket analysis) searches for interesting customer habits by looking at associations. The classical example is the one where a store in the USA was reported to have discovered that people buying nappies also tend to buy beer. Applications include marketing, store layout, customer segmentation, medicine, finance, and many more.

A Simple Example Consider the ten transactions, shown on the slide, of a shop selling only nine items. We will return to them after explaining some terminology.

Terminology Let the number of different items sold be n and the number of transactions be N. Let the set of items be {i1, i2, …, in}. The number of items may be large, perhaps several thousand. Let the set of transactions be {t1, t2, …, tN}. Each transaction ti contains a subset of items from the itemset {i1, i2, …, in}; these are the things a customer buys on a visit to the supermarket. N is assumed to be large, perhaps in the millions. We do not consider the quantities of items bought.

Terminology We want to find groups of items that tend to occur together frequently. Association rules are often written as X→Y, meaning that whenever X appears Y also tends to appear. X and Y may be single items or sets of items, but the same item does not appear in both.

Terminology Suppose X and Y appear together in only 1% of the transactions but whenever X appears there is an 80% chance that Y also appears. The 1% presence of X and Y together is called the support (or prevalence) of the rule and the 80% is called the confidence (or predictability) of the rule. These are measures of the interestingness of the rule. Confidence denotes the strength of the association between X and Y, while support indicates the frequency of the pattern. A minimum support is necessary if an association is going to be of some business value. It is also important to define the concept of lift, which is commonly used in mail-order marketing. Suppose 5% of the customers of a store buy a TV, but customers who buy a lounge suite are much more likely to buy a TV; let that likelihood be 20%. The ratio of 20% to 5% (i.e. 4) is called the lift. Lift essentially measures the strength of the relationship X→Y.

Terminology Let the chance of finding an item X in the N transactions be x%; then the probability of X is P(X) = x/100, since probability values are always between 0.0 and 1.0. Now suppose we have two items X and Y with probabilities P(X) = 0.2 and P(Y) = 0.1. What does the product P(X) × P(Y) mean? What is likely to be the chance of both items X and Y appearing together, that is P(X and Y) = P(X U Y)?

Terminology Suppose we know P(X U Y); then what is the probability of Y appearing if we know that X is already present? This is written as P(Y|X). The support for X→Y is the probability of both X and Y appearing together, that is P(X U Y). The confidence of X→Y is the conditional probability of Y appearing given that X exists; it is written as P(Y|X) and read as P of Y given X.

Terminology Sometimes the term lift is also used. Lift is defined as P(X U Y) / (P(X) P(Y)), where P(X) P(Y) is the probability of X and Y appearing together if X and Y occur independently of each other. As an example, if the support of X and Y together is 1%, X appears in 4% of the transactions and Y appears in 2%, then lift = 0.01 / (0.04 × 0.02) = 12.5. What does this tell us about X and Y? What if the lift was 1?
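To make these measures concrete, here is a minimal Python sketch (not part of the original slides) that computes support, confidence and lift for a rule X → Y from a list of transactions; the transactions shown are made up purely for illustration.

    # Minimal sketch: support, confidence and lift of a rule X -> Y.
    # The transactions below are hypothetical, for illustration only.
    transactions = [
        {"Bread", "Cheese", "Juice"},
        {"Bread", "Milk"},
        {"Cheese", "Juice"},
        {"Bread", "Cheese", "Milk"},
    ]

    def support(itemset, transactions):
        """Fraction of transactions containing every item in itemset."""
        count = sum(1 for t in transactions if itemset <= t)
        return count / len(transactions)

    def confidence(X, Y, transactions):
        """P(Y | X): support of X and Y together divided by support of X."""
        return support(X | Y, transactions) / support(X, transactions)

    def lift(X, Y, transactions):
        """Support of X and Y together divided by P(X) * P(Y)."""
        return support(X | Y, transactions) / (support(X, transactions) * support(Y, transactions))

    X, Y = {"Bread"}, {"Cheese"}
    print(support(X | Y, transactions), confidence(X, Y, transactions), lift(X, Y, transactions))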

The task We want to find all associations which have at least p% support with at least q% confidence, such that all rules satisfying any user constraints are found, the rules are found efficiently from large databases, and the rules found are actionable. An example follows on the next slide.

An Example Consider a furniture and appliances store that has 100,000 sales records. Let 5,000 records contain lounge suites (5%) and let 10,000 records contain TVs (10%). The support for lounge suites is therefore 5% and for TVs 10%. It would be expected that the 5,000 records containing lounge suites will contain 10% TVs (i.e. 500). If the number of TV sales in those 5,000 records is in fact 2,500, then the confidence of the rule Lounge suite → TV is 50% and the lift is 5.

Question With 100,000 records, 5,000 records containing lounge suites (5%), 10,000 containing TVs (10%), and the number of TV sales in the 5,000 records containing lounge suites being 2,500, what are the support and the confidence of the following two rules? lounge → TV and TV → lounge

Associations It is worth noting: The lift of X → Y is the same as that of Y → X (why?). Would the confidence be the same? The only rules of interest are those with very high or very low lift (why?). Items that appear in most transactions are of little interest (why?). Similar items should be combined to reduce the total number of items (why?).

Applications Although we are only considering basket analysis, the technique has many other applications as noted earlier. For example, we may have many patients coming to a hospital with various diseases and from various backgrounds. We therefore have a number of attributes (items) and we may be interested in knowing which attributes appear together, and perhaps which attributes are frequently associated with some particular disease. Associations can also capture rules like "A customer buying shares in Microsoft is also likely to buy shares in AOL". Rules of the form X→Y have a left-hand side (called the antecedent) and a right-hand side (called the consequent). It is assumed that X and Y have no common items. Associations could also be considered a type of clustering.

Example We have 9 items and 10 transactions. Find the support and confidence for each pair. How many pairs are there?

Example There are 9 × 8 = 72 ordered pairs, half of them duplicates, so 36 distinct pairs. 36 is too many to analyse in a class, so we use an even simpler example with n = 4 items (Bread, Cheese, Juice, Milk) and N = 4 transactions. We want to find rules with a minimum support of 50% and a minimum confidence of 75%.

Example The four transactions result in the item and pair frequencies shown on the slide. All items have the 50% support we require. Only two pairs have 50% support and no 3-itemset has the required support. We will look at the two pairs that have the minimum support to find whether they also have the required confidence.

Terminology Items or itemsets that have the minimum support are called frequent. In our example, all four items and two pairs are frequent. We will now determine whether the two pairs {Bread, Cheese} and {Cheese, Juice} lead to association rules with 75% confidence. Every pair {A, B} can lead to two rules, A → B and B → A, if both satisfy the minimum confidence. The confidence of A → B is given by the support for A and B together divided by the support of A.

Rules We have four possible rules and their confidence is given as follows:
Bread → Cheese with confidence of 2/3 = 67%
Cheese → Bread with confidence of 2/3 = 67%
Cheese → Juice with confidence of 2/3 = 67%
Juice → Cheese with confidence of 100%
Therefore only the last rule, Juice → Cheese, has confidence above the minimum 75% and qualifies. Rules that have more than the user-specified minimum confidence are called confident.

Problems with Brute Force This simple algorithm works well with four items, since there were only 16 combinations that we needed to look at, but if the number of items is, say, 100, the number of combinations is astronomically large (about 10^30). The number of combinations is already about a million with 20 items, since there are 2^n combinations of n items (why?). The naive algorithm can be improved to deal more effectively with larger data sets.
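As a quick check of this growth (a small Python snippet, not from the slides, using arbitrary-precision integers):

    # Number of possible itemset combinations over n items (2**n, as on the slide).
    for n in (4, 20, 100):
        print(n, 2 ** n)
    # 4 -> 16, 20 -> 1,048,576 (about a million), 100 -> about 1.27e30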

Problems with Brute Force Can you think of an improvement over the brute force method in which we looked at every possible itemset combination? Naive algorithms, even with improvements, do not work efficiently enough to deal with large numbers of items and transactions. We now define a better algorithm.

The Apriori Algorithm To find associations, the classical Apriori algorithm may be described by a simple two-step approach: Step 1 ─ discover all frequent (single) items that have support above the minimum support required; Step 2 ─ use the set of frequent items to generate the association rules that have a high enough confidence level. Is this a reasonable algorithm? A more formal description is given on the next slide.

The Apriori Algorithm The step-by-step algorithm may be described as follows. Computing L1 This is Step 1. Scan all transactions and find all frequent items that have support above the required p%. Let all of these frequent items be labelled L1. Apriori-gen Function This is Step 2. Use the frequent items L1 to build all possible item pairs, like {Bread, Cheese} if Bread and Cheese are in L1. The set of these item pairs is called C2, the candidate set.

The Apriori Algorithm Pruning This is Step 3. Scan all transactions and find all frequent pairs in the candidate pair set C2. Let these frequent pairs be L2. General rule This is Step 4, a generalization of Step 2. Build the candidate set of k-itemsets, Ck, by combining frequent itemsets in the set Lk-1.

The Apriori Algorithm Pruning This is Step 5, a generalization of Step 3. Scan all transactions and find all frequent itemsets in Ck. Let these frequent itemsets be Lk. Continue with Step 4 unless Lk is empty; stop if Lk is empty. A brief code sketch of this level-wise loop follows.
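The level-wise procedure in Steps 1 to 5 can be sketched in a few lines of Python. This is an illustrative sketch rather than the slides' own code: min_support is an absolute count, candidate k-itemsets are built by joining frequent (k-1)-itemsets that share their first k-2 items, and the subset-based pruning described later is omitted for brevity.

    def apriori(transactions, min_support):
        """Return all frequent itemsets (as frozensets) with their support counts.
        transactions: list of sets of items; min_support: minimum absolute count."""
        # Step 1: find the frequent single items (L1).
        counts = {}
        for t in transactions:
            for item in t:
                counts[frozenset([item])] = counts.get(frozenset([item]), 0) + 1
        L = {s: c for s, c in counts.items() if c >= min_support}
        frequent = dict(L)
        k = 2
        while L:
            # Steps 2 and 4: build candidate k-itemsets Ck by joining (k-1)-itemsets
            # that share their first k-2 items (in sorted order).
            prev = sorted(tuple(sorted(s)) for s in L)
            candidates = set()
            for i in range(len(prev)):
                for j in range(i + 1, len(prev)):
                    if prev[i][:k - 2] == prev[j][:k - 2]:
                        candidates.add(frozenset(prev[i]) | frozenset(prev[j]))
            # Steps 3 and 5: scan the transactions and keep the frequent candidates (Lk).
            counts = {c: 0 for c in candidates}
            for t in transactions:
                for c in candidates:
                    if c <= t:
                        counts[c] += 1
            L = {c: n for c, n in counts.items() if n >= min_support}
            frequent.update(L)
            k += 1
        return frequent

    # Example use on hypothetical data:
    # apriori([{"Bread", "Cheese", "Juice"}, {"Bread", "Juice"}, {"Cheese", "Juice", "Milk"}], min_support=2)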

An Example This example is similar to the previous one, but we have added two more items and another transaction, giving five transactions and six items. We want to find association rules with 50% support and 75% confidence.

Example First find L1. 50% support requires that each frequent item appear in at least three of the five transactions. The four frequent items that make up L1 are shown on the slide.

Example The candidate 2-itemsets, C2, therefore contain six pairs (why?). These pairs and their frequencies are shown on the slide.

Deriving Rules L2 has only two frequent item pairs {Bread, Juice} and {Cheese, Juice}. After these two frequent pairs, there are no candidate 3-itemsets (why?) since we do not have two 2-itemsets that have the same first item (why?). The two frequent pairs lead to the following possible rules: Bread → Juice Juice → Bread Cheese → Juice Juice → Cheese

Deriving Rules The confidence of these rules is obtained by dividing the support for both items in the rule by the support of the item on the left-hand side of the rule. The confidences of the four rules therefore are:
3/4 = 75% (Bread → Juice)
3/4 = 75% (Juice → Bread)
3/3 = 100% (Cheese → Juice)
3/4 = 75% (Juice → Cheese)
Since all of them have the minimum 75% confidence, they all qualify. We are done since there are no 3-itemsets.

Questions How do we compute C3 from L2 or more generally Ci from Li-1? If there were several members of the set C3, how do we derive L3 by pruning C3? Assume that the 3-itemset {A, B, M} has the minimum support. What do we do next? How do we derive association rules?

Answers To compute C3 from L2, or more generally Ci from Li-1, we join members of Li-1 with other members of Li-1 by first sorting the itemsets in lexicographic order and then joining those itemsets whose first (i-2) items are common. Observe that if an itemset (a, b, c) is in C3, then L2 must have contained the itemsets (a, b) and (a, c); and for (a, b, c) to survive pruning, (b, c) must also be in L2, since all subsets of a frequent itemset must be frequent. Why? To repeat: itemsets in a candidate set Ci can be frequent (that is, end up in Li) only if every one of their subsets is frequent.

Answers For example, {A, B, M} will be frequent only if {A, B}, {A, M} and {B, M} are frequent, which in turn requires each of A, B and M also to be frequent.
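The join-and-prune idea just described is often written as a small helper, sometimes called apriori-gen. The Python sketch below is illustrative only: it joins sorted (k-1)-itemsets that agree on their first k-2 items and then discards any candidate that has a (k-1)-subset which is not frequent.

    from itertools import combinations

    def apriori_gen(L_prev, k):
        """Generate candidate k-itemsets from the frequent (k-1)-itemsets L_prev.
        L_prev is a collection of frozensets, each of size k-1."""
        prev = sorted(tuple(sorted(s)) for s in L_prev)
        prev_set = {frozenset(s) for s in L_prev}
        candidates = set()
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                # Join step: the first k-2 items must be identical.
                if prev[i][:k - 2] == prev[j][:k - 2]:
                    cand = frozenset(prev[i]) | frozenset(prev[j])
                    # Prune step: every (k-1)-subset must already be frequent.
                    if all(frozenset(sub) in prev_set for sub in combinations(sorted(cand), k - 1)):
                        candidates.add(cand)
        return candidates

    # With L2 = {A, B}, {A, M}, {B, M}, the candidate {A, B, M} survives
    # because all three of its 2-subsets are frequent:
    print(apriori_gen([frozenset("AB"), frozenset("AM"), frozenset("BM")], k=3))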

Example If the set {A, B, M} has the minimum support, we can find association rules that satisfy the conditions of support and confidence by first generating all nonempty proper subsets of {A, B, M} and using each of them as the LHS with the remaining items as the RHS. The nonempty proper subsets of {A, B, M} are A, B, M, AB, AM and BM. Therefore the possible rules involving all three items are A → BM, B → AM, M → AB, AB → M, AM → B and BM → A. Now we need to test their confidence.

Answers To test the confidence of the possible rules, we proceed as before. We know that confidence(A → B) = P(B|A) = P(A U B) / P(A). This confidence is the likelihood of finding B if A has already been found in a transaction; it is the ratio of the support for A and B together to the support for A by itself. The confidence of all these rules can thus be computed.
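As a sketch of this rule-generation step (illustrative Python with made-up support counts, not data from the slides), we can enumerate every nonempty proper subset of a frequent itemset as the LHS, put the remaining items on the RHS, and keep the rules whose confidence meets the threshold.

    from itertools import combinations

    def rules_from_itemset(itemset, support_counts, min_conf):
        """Generate rules LHS -> RHS from one frequent itemset.
        support_counts maps frozensets to their support counts."""
        items = frozenset(itemset)
        rules = []
        for r in range(1, len(items)):            # sizes of the LHS
            for lhs in combinations(sorted(items), r):
                lhs = frozenset(lhs)
                rhs = items - lhs
                conf = support_counts[items] / support_counts[lhs]
                if conf >= min_conf:
                    rules.append((set(lhs), set(rhs), conf))
        return rules

    # Hypothetical counts for {A, B, M} and its subsets, for illustration only:
    counts = {
        frozenset("ABM"): 3,
        frozenset("AB"): 4, frozenset("AM"): 3, frozenset("BM"): 5,
        frozenset("A"): 5, frozenset("B"): 6, frozenset("M"): 6,
    }
    print(rules_from_itemset("ABM", counts, min_conf=0.75))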

Efficiency Consider an example of a supermarket database which might have several thousand items, including 1000 frequent items, and several million transactions. Which part of the Apriori algorithm will be the most expensive to compute? Why?

Efficiency The algorithm used to construct the candidate set Ci in order to find the frequent set Li is crucial to the performance of the Apriori algorithm. The larger the candidate set, the higher the processing cost of discovering the frequent itemsets, since the transactions must be scanned for each candidate. Given that the number of candidate itemsets in the early passes is very large, the initial iterations dominate the cost. In a supermarket database with about 1000 frequent items, there will be almost half a million candidate pairs in C2 that need to be counted to find L2.
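As a quick check of that figure (a one-line computation, not from the slides):

    n = 1000
    print(n * (n - 1) // 2)   # 499,500 candidate pairs in C2 from 1000 frequent items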

Efficiency Generally the number of frequent pairs out of the half a million candidate pairs will be small, and therefore (why?) the number of candidate 3-itemsets should be small. Therefore it is the generation of the frequent set L2 that is the key to improving the performance of the Apriori algorithm.

Comment In class examples we usually require high support, for example 25%, 30% or even 50%. These support values are very high when the number of items and the number of transactions are large. For example, 25% support in a supermarket transaction database means searching for items that have been purchased by one in every four customers! Not many items would qualify. Practical applications therefore deal with much smaller support values, sometimes even down to 1% or lower.

Improving the Apriori Algorithm Many techniques for improving the efficiency have been proposed:
Pruning (already mentioned)
Hashing based technique
Transaction reduction
Partitioning
Sampling
Dynamic itemset counting

Pruning Pruning can reduce the size of the candidate set Ck. We want to transform Ck into the set of frequent itemsets Lk. To reduce the work of checking, we may use the rule that every subset of a frequent itemset must also be frequent, so any candidate in Ck that has an infrequent subset can be removed immediately.

Example
Suppose the items are A, B, C, D, E, F, .., X, Y, Z
Suppose L1 is A, C, E, P, Q, S, T, V, W, X
Suppose L2 is {A, C}, {A, F}, {A, P}, {C, P}, {E, P}, {E, G}, {E, V}, {H, J}, {K, M}, {Q, S}, {Q, X}
Are you able to identify the errors in the L2 list? What is C3? How do we prune C3?
C3 is {A, C, P}, {E, P, V}, {Q, S, X}

Hashing The direct hashing and pruning (DHP) algorithm attempts to generate large (frequent) itemsets efficiently and to reduce the size of the transaction database. When generating L1, we also generate all the 2-itemsets of each transaction, hash them into a hash table and keep a count for each bucket.

Example How does hashing work for this example? (The transaction table is on the slide; the first transaction contains items 1, 2, 5 and 7.) The possible 2-itemsets from the first transaction hash as follows:
{1,2} --> hash address 2
{1,5} --> 5
{1,7} --> 7
{2,5} --> 2
{2,7} --> 6
{5,7} --> 3
We have used a simple hash function: multiply the two item numbers and take the result mod 8. Note that there are collisions (for example, {1,2} and {2,5} both hash to address 2).
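The bucket counting can be sketched as below (illustrative Python; the multiply-then-mod-8 hash follows the slide, but only the first transaction is taken from the slide and the rest of the data is hypothetical).

    from itertools import combinations

    def dhp_bucket_counts(transactions, num_buckets=8):
        """While scanning for L1, also hash every 2-itemset of each transaction
        into a bucket and count how many pairs land in each bucket."""
        buckets = [0] * num_buckets
        for t in transactions:
            for a, b in combinations(sorted(t), 2):
                buckets[(a * b) % num_buckets] += 1   # hash: multiply, then mod 8
        return buckets

    # First transaction as on the slide; the remaining transactions are hypothetical.
    transactions = [{1, 2, 5, 7}, {3, 4, 6, 11}, {1, 2, 5}, {2, 5, 7}]
    print(dhp_bucket_counts(transactions))
    # A candidate pair can be discarded immediately if its bucket count is already
    # below the minimum support, which shrinks C2 before the next scan.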

Transaction Reduction As discussed earlier, any transaction that does not contain any frequent k-itemsets cannot contain any frequent (k+1)-itemsets and such a transaction may be marked or removed.

Example Frequent items (L1) are A, B, D, M, T. We are not able to use these to eliminate any transactions, since every transaction contains at least one item from L1. The frequent pairs (L2) are {A, B} and {B, M}. How can we reduce transactions using these?
TID  Items bought
001  B, M, T, Y
002  B, M
003  T, S, P
004  A, B, C, D
005  A, B
006  T, Y, E
007  A, B, M
008  B, C, D, T, P
009  D, T, S
010
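A sketch of the reduction step (illustrative Python, assuming the frequent pairs have already been found): a transaction that contains no frequent k-itemset cannot contribute to any (k+1)-itemset count, so it can be dropped from later scans. Transaction 010 is omitted because its items are not listed on the slide.

    def reduce_transactions(transactions, frequent_k_itemsets):
        """Keep only transactions that contain at least one frequent k-itemset."""
        return [t for t in transactions if any(s <= t for s in frequent_k_itemsets)]

    # Transactions 001-009 from the table above; L2 = {A, B} and {B, M}.
    transactions = [
        {"B", "M", "T", "Y"}, {"B", "M"}, {"T", "S", "P"}, {"A", "B", "C", "D"},
        {"A", "B"}, {"T", "Y", "E"}, {"A", "B", "M"}, {"B", "C", "D", "T", "P"},
        {"D", "T", "S"},
    ]
    L2 = [frozenset({"A", "B"}), frozenset({"B", "M"})]
    print(reduce_transactions(transactions, L2))
    # Transactions 003, 006, 008 and 009 contain neither frequent pair and are removed.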

Partitioning The set of transactions may be divided into a number of disjoint subsets, and each partition is then searched for frequent itemsets. These frequent itemsets are called local frequent itemsets. How can information about local frequent itemsets be used in finding the frequent itemsets of the global set of transactions? In the example on the next slide, a set of transactions has been divided into two partitions. Find the frequent itemsets for each partition. Are these local frequent itemsets useful?

Example (The slide shows the transactions divided into two partitions; find the local frequent itemsets in each partition.)

Partitioning
Phase 1
Divide the n transactions into m partitions
Find the frequent itemsets in each partition
Combine all local frequent itemsets to form candidate itemsets
Phase 2
Find the global frequent itemsets
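A two-phase sketch in Python (illustrative, not the slides' own code): any itemset that is frequent in the whole database must be frequent in at least one partition, so the union of the local frequent itemsets is a valid global candidate set, which is then verified with one more scan of the full database. The local_miner parameter is a placeholder for any frequent-itemset miner, for example the apriori() sketch given earlier.

    def partitioned_frequent_itemsets(transactions, min_support_fraction, num_partitions, local_miner):
        """Two-phase partition approach.
        local_miner(partition, min_count) must return the frequent itemsets of a partition."""
        size = len(transactions) // num_partitions + 1
        partitions = [transactions[i:i + size] for i in range(0, len(transactions), size)]

        # Phase 1: mine each partition with a proportionally scaled support threshold.
        candidates = set()
        for p in partitions:
            local_min = max(1, int(min_support_fraction * len(p)))
            candidates |= set(local_miner(p, local_min))

        # Phase 2: one scan of the full database to count the candidates exactly.
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        global_min = min_support_fraction * len(transactions)
        return {c: n for c, n in counts.items() if n >= global_min}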

Sampling A random sample (usually large enough to fit in main memory) may be drawn from the overall set of transactions, and the sample is searched for frequent itemsets. These frequent itemsets are called sample frequent itemsets. How can information about sample frequent itemsets be used in finding the frequent itemsets of the global set of transactions?

Sampling This approach is not guaranteed to be accurate; we sacrifice accuracy for efficiency. A lower support threshold may be used for the sample to reduce the chance of missing any frequent itemsets. The actual frequencies of the sample frequent itemsets are then obtained from the full set of transactions. More than one sample could be used to improve accuracy.
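A sampling sketch (illustrative Python, not from the slides): mine a random sample at a lowered support threshold, then verify the counts of the sample frequent itemsets against the full database. The miner parameter is again a placeholder for a frequent-itemset miner such as the apriori() sketch given earlier, and the slack factor for lowering the threshold is an arbitrary choice.

    import random

    def sample_then_verify(transactions, min_support_fraction, sample_size, miner, slack=0.8):
        """Mine a random sample at a lowered threshold (min_support_fraction * slack),
        then count the sample-frequent itemsets over the full database."""
        sample = random.sample(transactions, min(sample_size, len(transactions)))
        lowered = max(1, int(min_support_fraction * slack * len(sample)))
        sample_frequent = set(miner(sample, lowered))

        # Verify against the full database; this is not guaranteed to recover every
        # globally frequent itemset, which is the accuracy/efficiency trade-off.
        global_min = min_support_fraction * len(transactions)
        counts = {s: sum(1 for t in transactions if s <= t) for s in sample_frequent}
        return {s: n for s, n in counts.items() if n >= global_min}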

Problems with Association Rules Algorithms Users are overwhelmed by the number of rules identified ─ how can the number of rules be reduced to those relevant to the user's needs? The Apriori algorithm assumes sparsity, since the number of items on each record is usually quite small. Some applications produce dense data, which may have many frequently occurring items, strong correlations, and many items on each record.

Problems with Association Rules Also consider: AB → C (90% confidence) and A → C (92% confidence). Clearly the first rule is of no use; we should look for more complex rules only if they are better than the simpler rules.

Bibliography
R. Agrawal, T. Imielinski, and A. Swami, Mining Association Rules between Sets of Items in Large Databases, in Proc. of the ACM SIGMOD, 1993, pp. 207-216.
R. Ramakrishnan and J. Gehrke, Database Management Systems, 2nd ed., McGraw-Hill, 2000.
M. J. A. Berry and G. Linoff, Mastering Data Mining: The Art and Science of Customer Relationship Management, Wiley, 2000.
I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations, Morgan Kaufmann, 2000.

Bibliography
M. J. A. Berry and G. Linoff, Data Mining Techniques: For Marketing, Sales, and Customer Support, Wiley, New York, 1997.
U. M. Fayyad, G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy (eds.), Advances in Knowledge Discovery and Data Mining, AAAI/MIT Press, 1996.
R. Agrawal, M. Mehta, J. Shafer, A. Arning, and T. Bollinger, The Quest Data Mining System, Proc. 1996 Int. Conf. on Knowledge Discovery and Data Mining (KDD'96), Portland, Oregon, pp. 244-249, Aug 1996.