CPS 196.03: Information Management and Mining. Association Rules and Frequent Itemsets.

Similar presentations
Association Rule Mining
Huffman Codes and Asssociation Rules (II) Prof. Sin-Min Lee Department of Computer Science.
Association Analysis (2). Example TIDList of item ID’s T1I1, I2, I5 T2I2, I4 T3I2, I3 T4I1, I2, I4 T5I1, I3 T6I2, I3 T7I1, I3 T8I1, I2, I3, I5 T9I1, I2,
1 Frequent Itemset Mining: Computation Model uTypically, data is kept in a flat file rather than a database system. wStored on disk. wStored basket-by-basket.
DATA MINING Association Rule Discovery. AR Definition aka Affinity Grouping Common example: Discovery of which items are frequently sold together at a.
Data Mining of Very Large Data
1 Data Mining Introductions What Is It? Cultures of Data Mining.
IDS561 Big Data Analytics Week 6.
 Back to finding frequent itemsets  Typically, data is kept in flat files rather than in a database system:  Stored on disk  Stored basket-by-basket.
1 Mining Associations Apriori Algorithm. 2 Computation Model uTypically, data is kept in a flat file rather than a database system. wStored on disk. wStored.
MIS2502: Data Analytics Association Rule Mining. Uses What products are bought together? Amazon’s recommendation engine Telephone calling patterns Association.
1 Association Rules Market Baskets Frequent Itemsets A-Priori Algorithm.
1 of 25 1 of 45 Association Rule Mining CIT366: Data Mining & Data Warehousing Instructor: Bajuna Salehe The Institute of Finance Management: Computing.
Data Mining Association Analysis: Basic Concepts and Algorithms
Association Analysis. Association Rule Mining: Definition Given a set of records each of which contain some number of items from a given collection; –Produce.
Data Mining Techniques So Far: Cluster analysis K-means Classification Decision Trees J48 (C4.5) Rule-based classification JRIP (RIPPER) Logistic Regression.
Data Mining Association Analysis: Basic Concepts and Algorithms Introduction to Data Mining by Tan, Steinbach, Kumar © Tan,Steinbach, Kumar Introduction.
Data Mining Association Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 6 Introduction to Data Mining by Tan, Steinbach, Kumar © Tan,Steinbach,
Data Mining, Frequent-Itemset Mining
1 Association Rules Apriori Algorithm. 2 Computation Model uTypically, data is kept in a flat file rather than a database system. wStored on disk. wStored.
1 Improvements to A-Priori Park-Chen-Yu Algorithm Multistage Algorithm Approximate Algorithms Compacting Results.
Data Mining Association Analysis: Basic Concepts and Algorithms
1 Association Rules Market Baskets Frequent Itemsets A-priori Algorithm.
Data Mining Association Analysis: Basic Concepts and Algorithms
Improvements to A-Priori
Asssociation Rules Prof. Sin-Min Lee Department of Computer Science.
Data Mining, Frequent-Itemset Mining. Data Mining Some mining problems Find frequent itemsets in "market-basket" data – "50% of the people who buy hot.
1 CS Data Mining Introductions What Is It? Cultures of Data Mining.
© Vipin Kumar CSci 8980 Fall CSci 8980: Data Mining (Fall 2002) Vipin Kumar Army High Performance Computing Research Center Department of Computer.
1 CS Data Mining Introductions What Is It? Cultures of Data Mining.
1 “Association Rules” Market Baskets Frequent Itemsets A-priori Algorithm.
Data Mining An Introduction.
Data Mining Association Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 6 Introduction to Data Mining by Tan, Steinbach, Kumar © Tan,Steinbach,
Data Mining Association Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 6 Introduction to Data Mining By Tan, Steinbach, Kumar Lecture.
Modul 7: Association Analysis. 2 Association Rule Mining  Given a set of transactions, find rules that will predict the occurrence of an item based on.
Frequent Itemsets and Association Rules 1 Wu-Jun Li Department of Computer Science and Engineering Shanghai Jiao Tong University Lecture 3: Frequent Itemsets.
ASSOCIATION RULE DISCOVERY (MARKET BASKET-ANALYSIS) MIS2502 Data Analytics Adapted from Tan, Steinbach, and Kumar (2004). Introduction to Data Mining.
DATA MINING LECTURE 3 Frequent Itemsets Association Rules.
Supermarket shelf management – Market-basket model:  Goal: Identify items that are bought together by sufficiently many customers  Approach: Process.
Note to other teachers and users of these slides: We would be delighted if you found this our material useful in giving your own lectures. Feel free to.
EXAM REVIEW MIS2502 Data Analytics. Exam What Tool to Use? Evaluating Decision Trees Association Rules Clustering.
CS 8751 ML & KDDSupport Vector Machines1 Mining Association Rules KDD from a DBMS point of view –The importance of efficiency Market basket analysis Association.
Sampling Large Databases for Association Rules Jingting Zeng CIS 664 Presentation March 13, 2007.
Frequent-Itemset Mining. Market-Basket Model A large set of items, e.g., things sold in a supermarket. A large set of baskets, each of which is a small.
Association Rule Mining
ASSOCIATION RULES (MARKET BASKET-ANALYSIS) MIS2502 Data Analytics Adapted from Tan, Steinbach, and Kumar (2004). Introduction to Data Mining.
© Tan,Steinbach, Kumar Introduction to Data Mining 4/18/ Data Mining: Association Analysis This lecture node is modified based on Lecture Notes for.
CS 345: Topics in Data Warehousing Thursday, November 18, 2004.
1 CPS216: Advanced Database Systems Data Mining Slides created by Jeffrey Ullman, Stanford.
Jeffrey D. Ullman Stanford University.  2% of your grade will be for answering other students’ questions on Piazza.  18% for Gradiance.  Piazza code.
1 Association Rules Market Baskets Frequent Itemsets A-Priori Algorithm.
Data Mining Association Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 6 Introduction to Data Mining by Tan, Steinbach, Kumar © Tan,Steinbach,
Chap 6: Association Rules. Rule Rules!  Motivation ~ recent progress in data mining + warehousing have made it possible to collect HUGE amount of data.
1 Data Mining Lecture 6: Association Analysis. 2 Association Rule Mining l Given a set of transactions, find rules that will predict the occurrence of.
Data Mining Association Analysis: Basic Concepts and Algorithms
Frequent Pattern Mining
William Norris Professor and Head, Department of Computer Science
Frequent Itemsets Association Rules
CPS216: Advanced Database Systems Data Mining
Market Basket Many-to-many relationship between different objects
Data Mining Association Analysis: Basic Concepts and Algorithms
Association Rule Mining
Data Mining Association Analysis: Basic Concepts and Algorithms
Market Baskets Frequent Itemsets A-Priori Algorithm
Hash-Based Improvements to A-Priori
Data Mining Association Rules Assoc.Prof.Songül Varlı Albayrak
Frequent Itemset Mining & Association Rules
Data Mining Association Analysis: Basic Concepts and Algorithms
Market Basket Analysis and Association Rules
Association Analysis: Basic Concepts
Presentation transcript:

1 CPS 196.03: Information Management and Mining: Association Rules and Frequent Itemsets

2 Let Us Begin with an Example
- A common marketing problem: examine what people buy together to discover patterns.
  1. What pairs of items are unusually often found together at a Kroger checkout? Answer: diapers and beer.
  2. What books are likely to be bought by the same Amazon customer?

3 Caveat
- A big risk when mining data is that you will "discover" patterns that are meaningless.
- Statisticians call it Bonferroni's principle: (roughly) if you look in more places for interesting patterns than your amount of data will support, you are bound to find "false patterns."

4 Rhine Paradox --- (1)
- David Rhine was a parapsychologist in the 1950's who hypothesized that some people had Extra-Sensory Perception.
- He devised an experiment where subjects were asked to guess 10 hidden cards --- red or blue.
- He discovered that almost 1 in 1000 had ESP --- they were able to get all 10 right!

5 Rhine Paradox --- (2)
- He told these people they had ESP and called them in for another test of the same type.
- Alas, he discovered that almost all of them had lost their ESP.
- What did he conclude?
  - Answer on next slide.

6 Rhine Paradox --- (3)
- He concluded that you shouldn't tell people they have ESP; it causes them to lose it.

7 "Association Rules"
Market Baskets
Frequent Itemsets
A-Priori Algorithm

8 The Market-Basket Model
- A large set of items, e.g., things sold in a supermarket.
- A large set of baskets, each of which is a small set of the items, e.g., the things one customer buys on one day.

9 Association Rule Mining
[Figure: a table of sales records with columns "transaction id", "customer id", and "products bought" --- market-basket data. Trend: products p5 and p8 are often bought together.]

10 Support
- Simplest question: find sets of items that appear "frequently" in the baskets.
- Support for itemset I = the number of baskets containing all items in I.
- Given a support threshold s, sets of items that appear in at least s baskets are called frequent itemsets.

11 Example
- Items = {milk, coke, pepsi, beer, juice}.
- Support threshold = 3 baskets.
    B1 = {m, c, b}    B2 = {m, p, j}
    B3 = {m, b}       B4 = {c, j}
    B5 = {m, p, b}    B6 = {m, c, b, j}
    B7 = {c, b, j}    B8 = {b, c}
- What are the possible itemsets?
  - The lattice of itemsets.
- How would you find the frequent itemsets?

12 Example
- Frequent itemsets: {m}, {c}, {b}, {j}, {m, b}, {c, b}, {j, c}.
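
A minimal brute-force sketch (not part of the slides) of how these frequent itemsets can be checked: enumerate every candidate itemset over the five items and count the baskets that contain it. Variable names are illustrative.

```python
from itertools import combinations

# The eight example baskets from the slide, with support threshold s = 3.
baskets = [
    {"m", "c", "b"}, {"m", "p", "j"},
    {"m", "b"},      {"c", "j"},
    {"m", "p", "b"}, {"m", "c", "b", "j"},
    {"c", "b", "j"}, {"b", "c"},
]
items = {"m", "c", "p", "b", "j"}
s = 3

frequent = []
for k in range(1, len(items) + 1):
    for candidate in combinations(sorted(items), k):
        # Support of an itemset = number of baskets containing all of its items.
        support = sum(1 for basket in baskets if set(candidate) <= basket)
        if support >= s:
            frequent.append((candidate, support))

print(frequent)  # singletons m, c, b, j and pairs {b, c}, {b, m}, {c, j}
```

This enumerates all 2^5 - 1 candidate itemsets, which is exactly the blow-up the rest of the lecture (A-Priori) is designed to avoid.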

13 Applications --- (1)
- Real market baskets: chain stores keep terabytes of information about what customers buy together.
  - Tells how typical customers navigate stores; lets them position tempting items.
  - Suggests tie-in "tricks," e.g., run a sale on diapers and raise the price of beer.
- High support needed, or no $$'s.

14 Applications --- (2)
- "Baskets" = documents; "items" = words in those documents.
  - Lets us find words that appear together unusually frequently, i.e., linked concepts.
- "Baskets" = sentences; "items" = documents containing those sentences.
  - Items that appear together too often could represent plagiarism.

15 Applications --- (3)
- "Baskets" = Web pages; "items" = linked pages.
  - Pairs of pages with many common references may be about the same topic.
  - Ex: think of our two data mining textbooks.
- "Baskets" = Web pages p; "items" = pages that link to p.
  - Pages with many of the same links may be mirrors or about the same topic.
  - Ex: think of people with similar interests.

16 Important Point
- "Market baskets" is an abstraction that models any many-many relationship between two concepts: "items" and "baskets."
  - Items need not be "contained" in baskets.
- The only difference is that we count co-occurrences of items related to a basket, not vice-versa.

17 Scale of Problem
- WalMart sells 100,000 items and can store billions of baskets.
- The Web has over 100,000,000 words and billions of pages.

18 Association Rules
- If-then rules about the contents of baskets.
- {i1, i2, …, ik} → j means: "if a basket contains all of i1, …, ik, then it is likely to contain j."
- The confidence of this association rule is the probability of j given i1, …, ik.

19 Example
    B1 = {m, c, b}    B2 = {m, p, j}
    B3 = {m, b}       B4 = {c, j}
    B5 = {m, p, b}    B6 = {m, c, b, j}
    B7 = {c, b, j}    B8 = {b, c}
- An association rule: {m, b} → c.
  - Confidence = 2/4 = 50%.
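
A small sketch (illustrative helper name, not from the slides) of that confidence calculation: count the baskets containing the left-hand side, and the fraction of those that also contain the right-hand side.

```python
def confidence(baskets, lhs, rhs):
    """Confidence of the rule lhs -> rhs, estimated as P(rhs | lhs) over the baskets."""
    lhs = set(lhs)
    lhs_count = sum(1 for b in baskets if lhs <= b)
    both_count = sum(1 for b in baskets if lhs | {rhs} <= b)
    return both_count / lhs_count

baskets = [
    {"m", "c", "b"}, {"m", "p", "j"}, {"m", "b"}, {"c", "j"},
    {"m", "p", "b"}, {"m", "c", "b", "j"}, {"c", "b", "j"}, {"b", "c"},
]
print(confidence(baskets, {"m", "b"}, "c"))  # 0.5: {m, b} is in 4 baskets, {m, b, c} in 2
```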

20 Interest
- The interest of an association rule is the absolute value of the amount by which the confidence differs from what you would expect, were items selected independently of one another.

21 Example
    B1 = {m, c, b}    B2 = {m, p, j}
    B3 = {m, b}       B4 = {c, j}
    B5 = {m, p, b}    B6 = {m, c, b, j}
    B7 = {c, b, j}    B8 = {b, c}
- For the association rule {m, b} → c, item c appears in 5/8 of the baskets.
- Interest = |2/4 - 5/8| = 1/8 --- not very interesting.
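
Extending the confidence sketch above (same assumed helpers, not from the slides), the interest of {m, b} → c is the absolute gap between the rule's confidence and the overall fraction of baskets containing c.

```python
def interest(baskets, lhs, rhs):
    """|confidence(lhs -> rhs) - fraction of baskets containing rhs|."""
    p_rhs = sum(1 for b in baskets if rhs in b) / len(baskets)
    return abs(confidence(baskets, lhs, rhs) - p_rhs)  # reuses confidence() defined earlier

print(interest(baskets, {"m", "b"}, "c"))  # |2/4 - 5/8| = 0.125
```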

22 Relationships Among Measures
- Rules with high support and confidence may be useful even if they are not "interesting."
  - We don't care if buying bread causes people to buy milk, or whether simply a lot of people buy both bread and milk.
- But high interest suggests a cause that might be worth investigating.

23 Finding Association Rules
- A typical question: "find all association rules with support ≥ s and confidence ≥ c."
  - Note: the "support" of an association rule is the support of the set of items it mentions.
- Hard part: finding the high-support (frequent) itemsets.
  - Checking the confidence of association rules involving those sets is relatively easy.

24 Finding Association Rules
- Two-step approach (a rule-generation sketch follows this list):
  1. Frequent-itemset generation: generate all itemsets whose support ≥ minsup.
  2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset.
- Frequent-itemset generation is still computationally expensive.
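
A hedged sketch of step 2 for a single frequent itemset: try every non-empty proper subset as the left-hand side and keep the binary partitions whose confidence clears a threshold. The `support` map of itemset counts is assumed to come out of step 1; the names are illustrative.

```python
from itertools import combinations

def rules_from_itemset(itemset, support, min_conf):
    """Yield (lhs, rhs, confidence) rules from one frequent itemset.

    `support` maps frozensets to basket counts from step 1;
    confidence(lhs -> rhs) = support(itemset) / support(lhs).
    """
    itemset = frozenset(itemset)
    for r in range(1, len(itemset)):
        for lhs in combinations(sorted(itemset), r):
            lhs = frozenset(lhs)
            conf = support[itemset] / support[lhs]
            if conf >= min_conf:
                yield lhs, itemset - lhs, conf

# Using the counts from the earlier example baskets:
support = {frozenset("mb"): 4, frozenset("m"): 5, frozenset("b"): 6}
for lhs, rhs, conf in rules_from_itemset("mb", support, min_conf=0.6):
    print(set(lhs), "->", set(rhs), round(conf, 2))  # b -> m (0.67) and m -> b (0.8)
```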

25 Computation Model
- Typically, data is kept in a "flat file" rather than a database system.
  - Stored on disk.
  - Stored basket-by-basket.
  - Expand baskets into pairs, triples, etc. as you read the baskets.

26 Computation Model --- (2)
- The true cost of mining disk-resident data is usually the number of disk I/O's.
- In practice, association-rule algorithms read the data in passes --- all baskets are read in turn.
- Thus, we measure the cost by the number of passes an algorithm takes.

27 Main-Memory Bottleneck
- In many algorithms for finding frequent itemsets we need to worry about how main memory is used.
  - As we read baskets, we need to count something, e.g., occurrences of pairs.
  - The number of different things we can count is limited by main memory.
  - Swapping counts in and out is a disaster.

28 Finding Frequent Pairs
- The hardest problem often turns out to be finding the frequent pairs.
- We'll concentrate on how to do that, then discuss extensions to finding frequent triples, etc.

29 The Lattice of Itemsets
- Given d items, there are 2^d possible candidate itemsets.

30 Naïve Algorithm
- A simple way to find frequent pairs:
  - Read the file once, counting in main memory the occurrences of each pair: expand each basket of n items into its n(n-1)/2 pairs.
- Fails if (#items)^2 exceeds main memory.
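
A sketch of that single-pass counter, assuming the counts fit in memory; the file format (one basket per line, space-separated items) is an assumption for illustration.

```python
from collections import Counter
from itertools import combinations

def count_pairs_naive(path):
    """One pass over the basket file: expand each basket into its n(n-1)/2 pairs."""
    pair_counts = Counter()
    with open(path) as f:
        for line in f:
            basket = sorted(set(line.split()))
            for pair in combinations(basket, 2):
                pair_counts[pair] += 1
    # Fails in practice once the roughly (#items)^2 / 2 counts no longer fit in RAM.
    return pair_counts
```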

31 Details of Main-Memory Counting
- There are two basic approaches:
  1. Count all item pairs, using a triangular matrix.
  2. Keep a table of triples [i, j, c], meaning the count of the pair of items {i, j} is c.
- (1) requires only (say) 4 bytes per pair; (2) requires 12 bytes per pair, but only for those pairs with counts > 0.

32 [Figure: memory layout of the two counting methods. Method (1), triangular matrix: 4 bytes per pair. Method (2), table of triples: 12 bytes per occurring pair.]

33 Details of Approach (1)
- Number the items 1, 2, …, n.
- Keep pair counts in the order {1,2}, {1,3}, …, {1,n}, {2,3}, {2,4}, …, {2,n}, {3,4}, …, {3,n}, …, {n-1,n}.
- Find the pair {i, j} (with i < j) at position (i-1)(n - i/2) + j - i.
- Total number of pairs is n(n-1)/2; total bytes about 2n^2.
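
The indexing formula as a small sketch (hypothetical helper name): items are numbered 1..n and the pair counts live in a flat array of length n(n-1)/2.

```python
def triangular_index(i, j, n):
    """0-based array slot for pair {i, j}, 1 <= i < j <= n.

    The slide's 1-based position is (i-1)(n - i/2) + j - i; this is the same
    value written with integer arithmetic, minus 1 for 0-based indexing.
    """
    assert 1 <= i < j <= n
    return (i - 1) * (2 * n - i) // 2 + (j - i) - 1

n = 5
counts = [0] * (n * (n - 1) // 2)       # one 4-byte count per possible pair
counts[triangular_index(2, 4, n)] += 1  # bump the count for pair {2, 4}
```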

34 Details of Approach (2)
- You need a hash table, with i and j as the key, to locate (i, j, c) triples efficiently.
  - Typically, the cost of the hash structure can be neglected.
- Total bytes used is about 12p, where p is the number of pairs that actually occur.
  - Beats the triangular matrix if at most 1/3 of the possible pairs actually occur.
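
A sketch of approach (2) using a Python dict keyed by the item pair, which plays the role of the hash table of [i, j, c] triples (helper name illustrative).

```python
from collections import defaultdict
from itertools import combinations

def count_pairs_table(baskets):
    """Store a count only for the pairs that actually occur."""
    counts = defaultdict(int)  # roughly 12 bytes per occurring pair in the slide's accounting
    for basket in baskets:
        for i, j in combinations(sorted(basket), 2):
            counts[(i, j)] += 1
    return counts

# Break-even with the triangular matrix: 12p < 4 * n(n-1)/2, i.e., the table wins
# when fewer than about 1/3 of the n(n-1)/2 possible pairs ever occur.
```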

35 A-Priori Algorithm --- (1)
- A two-pass approach called a-priori limits the need for main memory.
- Key idea: monotonicity --- if a set of items appears at least s times, so does every subset.
  - Contrapositive for pairs: if item i does not appear in s baskets, then no pair including i can appear in s baskets.

36 Illustrating the Apriori Principle
[Figure: itemset lattice in which one itemset is found to be infrequent and all of its supersets are pruned.]

37 Illustrating the Apriori Principle
- Consider the following market-basket data.
[Table of market-basket transactions: not recoverable from the transcript.]

38 Illustrating the Apriori Principle
[Figure: support-count tables for items (1-itemsets), pairs (2-itemsets), and triplets (3-itemsets); no candidates involving Coke or Eggs need to be generated.]
- Minimum support = 3.
- If every subset is considered: 6C1 + 6C2 + 6C3 = 6 + 15 + 20 = 41 candidates.
- With support-based pruning: 6 + 6 + 1 = 13 candidates.
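
A quick arithmetic check of those candidate counts (not part of the deck):

```python
from math import comb

print(comb(6, 1) + comb(6, 2) + comb(6, 3))  # 6 + 15 + 20 = 41 candidates without pruning
print(6 + 6 + 1)                             # 13 candidates with support-based pruning
```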

39 A-Priori Algorithm --- (2)
- Pass 1: Read the baskets and count in main memory the occurrences of each item.
  - Requires memory proportional only to the number of items.
- Pass 2: Read the baskets again and count in main memory only those pairs both of whose items were found frequent in Pass 1.
  - Requires memory proportional to the square of the number of frequent items only.
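
A compact sketch of those two passes for frequent pairs, following the slide's outline; iterating over an in-memory list twice stands in for the two disk passes, and the names are illustrative.

```python
from collections import Counter
from itertools import combinations

def apriori_pairs(read_baskets, s):
    """read_baskets() must return a fresh iterable of baskets; s is the support threshold."""
    # Pass 1: count individual items (memory proportional to #items).
    item_counts = Counter()
    for basket in read_baskets():
        item_counts.update(set(basket))
    frequent_items = {i for i, c in item_counts.items() if c >= s}

    # Pass 2: count only pairs whose items are both frequent.
    pair_counts = Counter()
    for basket in read_baskets():
        kept = sorted(set(basket) & frequent_items)
        for pair in combinations(kept, 2):
            pair_counts[pair] += 1
    return {p: c for p, c in pair_counts.items() if c >= s}

baskets = [["m","c","b"], ["m","p","j"], ["m","b"], ["c","j"],
           ["m","p","b"], ["m","c","b","j"], ["c","b","j"], ["b","c"]]
print(apriori_pairs(lambda: baskets, s=3))
# {('b', 'c'): 4, ('b', 'm'): 4, ('c', 'j'): 3}
```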

40 Picture of A-Priori
[Figure: main-memory layout for the two passes. Pass 1: item counts. Pass 2: the set of frequent items plus counts of candidate pairs.]

41 Detail for A-Priori
- You can use the triangular-matrix method with n = the number of frequent items.
  - Saves space compared with storing triples.
- Trick: number the frequent items 1, 2, … and keep a table relating the new numbers to the original item numbers.

42 Frequent Triples, Etc.
- For each k, we construct two sets of k-tuples:
  - Ck = candidate k-tuples = those that might be frequent sets (support ≥ s), based on information from the pass for k-1.
  - Lk = the set of truly frequent k-tuples.

43 [Figure: the A-Priori pipeline C1 --filter--> L1 --construct--> C2 --filter--> L2 --construct--> C3, with C1 and L1 handled on the first pass and C2 and L2 on the second pass.]

44 A-Priori for All Frequent Itemsets
- One pass for each k.
- Needs room in main memory to count each candidate k-tuple.
- For typical market-basket data and reasonable support (e.g., 1%), k = 2 requires the most memory.

45 Frequent Itemsets --- (2)
- C1 = all items.
- L1 = those items counted as frequent on the first pass.
- C2 = pairs, both of whose items are in L1.
- In general, Ck = k-tuples, each (k-1)-subset of which is in Lk-1.
- Lk = those candidates with support ≥ s.
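
A hedged sketch of this general level-wise loop (build Ck from Lk-1, then one pass over the baskets to filter it down to Lk); the join-and-prune construction of Ck is one standard way to realize the definition on this slide, and the names are illustrative.

```python
from itertools import combinations

def apriori(baskets, s):
    """Return all frequent itemsets (as frozensets) with support >= s."""
    baskets = [frozenset(b) for b in baskets]

    # L1: frequent individual items.
    item_counts = {}
    for b in baskets:
        for item in b:
            key = frozenset([item])
            item_counts[key] = item_counts.get(key, 0) + 1
    L = {i for i, c in item_counts.items() if c >= s}
    all_frequent = set(L)

    k = 2
    while L:
        # Ck: k-sets formed by joining two (k-1)-sets, kept only if every
        # (k-1)-subset is in L(k-1) (the monotonicity pruning).
        C = {a | b for a in L for b in L if len(a | b) == k}
        C = {c for c in C
             if all(frozenset(sub) in L for sub in combinations(c, k - 1))}
        # One pass over the baskets to count the surviving candidates.
        counts = {c: sum(1 for b in baskets if c <= b) for c in C}
        L = {c for c, n in counts.items() if n >= s}
        all_frequent |= L
        k += 1
    return all_frequent

baskets = [{"m","c","b"}, {"m","p","j"}, {"m","b"}, {"c","j"},
           {"m","p","b"}, {"m","c","b","j"}, {"c","b","j"}, {"b","c"}]
print(sorted(tuple(sorted(f)) for f in apriori(baskets, 3)))
# [('b',), ('b', 'c'), ('b', 'm'), ('c',), ('c', 'j'), ('j',), ('m',)]
```

On the example baskets this reproduces exactly the frequent itemsets listed on slide 12.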