
1 CSci 8980: Data Mining (Fall 2002)
Vipin Kumar
Army High Performance Computing Research Center
Department of Computer Science, University of Minnesota
http://www.cs.umn.edu/~kumar

2 Mining Associations
• Given a set of records, find rules that will predict the occurrence of an item based on the occurrences of other items in the record.
• Example: market-basket transactions (the slide shows a table of transactions).

3 Definition of Association Rule
• Association rule: an implication of the form X → Y, where X and Y are disjoint itemsets.
• Support (s): fraction of transactions that contain both X and Y.
• Confidence (c): fraction of the transactions containing X that also contain Y.
• Example: {Milk, Diaper} → {Beer} (worked through on the next slide).
• Goal: discover all rules having support ≥ minsup and confidence ≥ minconf thresholds.
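Written out (standard definitions; σ(X) is the number of transactions containing itemset X, N the total number of transactions), in LaTeX:

s(X \rightarrow Y) = \frac{\sigma(X \cup Y)}{N}, \qquad
c(X \rightarrow Y) = \frac{\sigma(X \cup Y)}{\sigma(X)}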

4 How to Mine Association Rules?
Example of rules:
{Milk, Diaper} → {Beer} (s = 0.4, c = 0.67)
{Milk, Beer} → {Diaper} (s = 0.4, c = 1.0)
{Diaper, Beer} → {Milk} (s = 0.4, c = 0.67)
{Beer} → {Milk, Diaper} (s = 0.4, c = 0.67)
{Diaper} → {Milk, Beer} (s = 0.4, c = 0.5)
{Milk} → {Diaper, Beer} (s = 0.4, c = 0.5)
Observations:
• All the rules above correspond to the same itemset: {Milk, Diaper, Beer}.
• Rules obtained from the same itemset have identical support but can have different confidence.
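The support and confidence values above can be reproduced with a short script. A minimal sketch, assuming the canonical five-basket transaction table from the course textbook (the table itself is not in this transcript, but the s and c values above are consistent with it):

# Support/confidence check for the rules on this slide (illustrative).
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
N = len(transactions)

def sigma(itemset):
    """Count transactions containing every item in itemset."""
    return sum(1 for t in transactions if itemset <= t)

def rule_stats(antecedent, consequent):
    """Return (support, confidence) of the rule antecedent -> consequent."""
    both = sigma(antecedent | consequent)
    return both / N, both / sigma(antecedent)

s, c = rule_stats({"Milk", "Diaper"}, {"Beer"})
print(f"s={s:.1f}, c={c:.2f}")  # s=0.4, c=0.67, matching the slide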

5 How to Mine Association Rules?
• Two-step approach:
1. Generate all frequent itemsets (sets of items whose support ≥ minsup).
2. Generate high-confidence association rules from each frequent itemset; each rule is a binary partitioning of a frequent itemset.
• Frequent itemset generation is the more expensive of the two operations.

6 Itemset Lattice
There are 2^d possible itemsets over d items. (Figure: the lattice of all itemsets.)

7 Generating Frequent Itemsets
• Naive approach (a brute-force sketch follows below):
– Each itemset in the lattice is a candidate frequent itemset.
– Count the support of each candidate by scanning the database.
– Complexity ~ O(NM), where N is the number of transactions and M the number of candidates. Expensive, since M = 2^d!
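A minimal sketch of the naive approach (illustrative code, not from the slides):

from itertools import chain, combinations

def naive_frequent_itemsets(transactions, minsup):
    """Brute force: every non-empty itemset is a candidate, and each
    candidate's support is counted with a full database scan.
    Cost is O(N * M) subset checks, with M = 2^d - 1 candidates."""
    items = sorted(set().union(*transactions))   # d unique items
    candidates = chain.from_iterable(
        combinations(items, k) for k in range(1, len(items) + 1))
    N = len(transactions)
    frequent = {}
    for cand in candidates:                      # M candidates...
        count = sum(1 for t in transactions if set(cand) <= t)  # ...N checks each
        if count / N >= minsup:
            frequent[cand] = count / N
    return frequent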

8 Computational Complexity
• Given d unique items:
– Total number of itemsets = 2^d.
– Total number of possible association rules: R = 3^d − 2^(d+1) + 1. If d = 6, R = 602 rules.
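The rule-count formula can be sanity-checked by direct enumeration: choose the k ≥ 2 items appearing in the rule, then count the 2^k − 2 non-trivial splits into antecedent and consequent.

from math import comb

def rule_count_formula(d):
    return 3**d - 2**(d + 1) + 1

def rule_count_direct(d):
    # choose k items for the rule, then 2^k - 2 non-empty splits
    return sum(comb(d, k) * (2**k - 2) for k in range(2, d + 1))

assert rule_count_formula(6) == rule_count_direct(6) == 602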

9 Approach for Mining Frequent Itemsets
• Reduce the number of candidates (M):
– Complete search: M = 2^d.
– Use the Apriori heuristic to reduce M.
• Reduce the number of transactions (N):
– Reduce the size of N as the size of the itemset increases.
– Used by DHP and vertical-based mining algorithms.
• Reduce the number of comparisons (NM):
– Use efficient data structures to store the candidates or transactions.
– No need to match every candidate against every transaction.

10 Reducing Number of Candidates
• Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
• The Apriori principle holds due to the following property of the support measure (a candidate-pruning sketch follows below):
– The support of an itemset never exceeds the support of any of its subsets.
– This is known as the anti-monotone property of support.
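A minimal sketch of the pruning this enables during candidate generation (apriori_gen is an illustrative name, not from the slides): a size-k candidate survives only if every one of its (k−1)-subsets was found frequent.

from itertools import combinations

def apriori_gen(frequent_prev):
    """Generate size-k candidates from frequent (k-1)-itemsets
    (frozensets) and prune those with an infrequent (k-1)-subset."""
    prev = set(frequent_prev)
    if not prev:
        return set()
    k = len(next(iter(prev))) + 1
    candidates = {a | b for a in prev for b in prev if len(a | b) == k}
    # Apriori pruning: every (k-1)-subset must itself be frequent
    return {c for c in candidates
            if all(frozenset(s) in prev for s in combinations(c, k - 1))}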

11 Using the Apriori Principle for Pruning Candidates
Equivalently: if an itemset is infrequent, then all of its supersets must also be infrequent. (Figure: itemset lattice in which one itemset is found to be infrequent and all of its supersets are pruned.)

12 Illustrating the Apriori Principle
Items (1-itemsets) → pairs (2-itemsets) → triplets (3-itemsets), with no need to generate candidates involving Coke or Eggs. Minimum support count = 3.
If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 6 + 15 + 20 = 41 candidates.
With support-based pruning: 6 + 6 + 1 = 13 candidates.
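Both counts check out arithmetically:

from math import comb
# All candidate itemsets of size 1 to 3 over 6 items:
assert comb(6, 1) + comb(6, 2) + comb(6, 3) == 6 + 15 + 20 == 41
# With support-based pruning, per the slide: 6 frequent items,
# 6 candidate pairs, 1 candidate triplet:
assert 6 + 6 + 1 == 13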

13 Reducing Number of Comparisons
• Candidate counting:
– Scan the database of transactions to determine the support of candidate itemsets.
– To reduce the number of comparisons, store the candidates in a hash structure (the hash tree of the next slides).

14 Association Rule Discovery: Hash Tree for Fast Access
(Figure: a candidate hash tree storing the candidate 3-itemsets {1,4,5}, {1,2,4}, {4,5,7}, {1,2,5}, {4,5,8}, {1,5,9}, {1,3,6}, {2,3,4}, {5,6,7}, {3,4,5}, {3,5,6}, {3,5,7}, {6,8,9}, {3,6,7}, {3,6,8}. Hash function: items 1, 4, 7 go to the left branch; 2, 5, 8 to the middle; 3, 6, 9 to the right. This slide highlights hashing on item 1, 4, or 7.)

15 Association Rule Discovery: Hash Tree for Fast Access
(Figure: the same candidate hash tree; this slide highlights hashing on item 2, 5, or 8.)

16 Association Rule Discovery: Hash Tree for Fast Access
(Figure: the same candidate hash tree; this slide highlights hashing on item 3, 6, or 9. A sketch of the structure follows below.)
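Since the figure itself is lost from the transcript, here is a minimal sketch of such a hash tree, assuming the slide's hash function (1,4,7 / 2,5,8 / 3,6,9, i.e. h(i) = (i − 1) mod 3) and an illustrative leaf capacity of 3; the candidate list is the one recoverable from the figure:

MAX_LEAF = 3  # illustrative leaf capacity; split a leaf when exceeded

def h(item):
    # Slide's hash function: 1,4,7 -> 0; 2,5,8 -> 1; 3,6,9 -> 2
    return (item - 1) % 3

class HashTreeNode:
    def __init__(self):
        self.children = {}   # branch index -> HashTreeNode (interior node)
        self.itemsets = []   # candidate itemsets (leaf node)

    def insert(self, itemset, depth=0):
        if self.children:                     # interior node: recurse
            branch = h(itemset[depth])
            self.children.setdefault(branch, HashTreeNode()) \
                .insert(itemset, depth + 1)
            return
        self.itemsets.append(itemset)         # leaf node: store candidate
        if len(self.itemsets) > MAX_LEAF and depth < len(itemset):
            for it in self.itemsets:          # overflow: split the leaf
                branch = h(it[depth])
                self.children.setdefault(branch, HashTreeNode()) \
                    .insert(it, depth + 1)
            self.itemsets = []

root = HashTreeNode()
for cand in [(1,4,5), (1,2,4), (4,5,7), (1,2,5), (4,5,8), (1,5,9),
             (1,3,6), (2,3,4), (5,6,7), (3,4,5), (3,5,6), (3,5,7),
             (6,8,9), (3,6,7), (3,6,8)]:      # candidates from the slide
    root.insert(cand)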

17 Candidate Counting
• Given a transaction t = {1, 2, 3, 5, 6}.
• Possible subsets of size 3: {1,2,3}, {1,2,5}, {1,2,6}, {1,3,5}, {1,3,6}, {1,5,6}, {2,3,5}, {2,3,6}, {2,5,6}, {3,5,6}.
• If the width of the transaction is w, there are 2^w − 1 possible non-empty subsets.
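The enumeration is easy to reproduce:

from itertools import combinations

t = (1, 2, 3, 5, 6)
size3 = list(combinations(t, 3))     # the 10 subsets listed above
assert len(size3) == 10              # C(5,3) = 10
assert 2 ** len(t) - 1 == 31         # all non-empty subsets, w = 5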

18 Association Rule Discovery: Subset Operation
(Figure: matching transaction {1, 2, 3, 5, 6} against the candidate hash tree. At the root, the transaction is split into item 1 + suffix {2,3,5,6}, item 2 + suffix {3,5,6}, and item 3 + suffix {5,6}, and each part is hashed down the tree.)

19 Association Rule Discovery: Subset Operation (continued)
(Figure: the same matching, one level deeper. The branch for item 1 expands into 1,2 + {3,5,6}, 1,3 + {5,6}, and 1,5 + {6}, and so on recursively, until leaves are reached and their candidate itemsets are checked against the transaction.)

20 Rule Generation
• Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement.
• If |L| = k, then there are 2^k − 2 possible association rules (ignoring L → ∅ and ∅ → L).
• In general, confidence does not have an anti-monotone property.
– But rules generated from the same itemset do have an anti-monotone property: moving items from the antecedent to the consequent never increases confidence.
– Given L = {A, B, C, D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD). (A rule-generation sketch follows below.)
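A minimal sketch of level-wise rule generation from one frequent itemset, assuming a support-count function sigma like the one sketched earlier (illustrative, not the slides' code). Consequents are grown one item at a time, and a consequent is extended only if its rule met minconf, which is exactly the anti-monotone pruning described above: c = sigma(L) / sigma(f), and sigma(f) grows as f shrinks.

def gen_rules(L, sigma, minconf):
    """All rules f -> L - f from frequent itemset L with c >= minconf."""
    L = frozenset(L)
    rules = []
    consequents = [frozenset([i]) for i in L]   # start with 1-item consequents
    while consequents and len(next(iter(consequents))) < len(L):
        survivors = []
        for cons in consequents:
            ante = L - cons
            conf = sigma(L) / sigma(ante)
            if conf >= minconf:
                rules.append((ante, cons, conf))
                survivors.append(cons)
        # grow consequents only from survivors (anti-monotone pruning)
        consequents = {a | b for a in survivors for b in survivors
                       if len(a | b) == len(a) + 1}
    return rules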

21 Rule Generation for the Apriori Algorithm
(Figure: lattice of rules derived from one frequent itemset. The lattice corresponds to a partial order on the items in the rule consequent.)

22 Rule Generation for the Apriori Algorithm
• A candidate rule is generated by merging two rules that share the same prefix in the rule consequent.
• Example: join(CD → AB, BD → AC) produces the candidate rule D → ABC.
• Prune rule D → ABC if its "subset" rule AD → BC does not have high confidence. (A sketch of the join step follows below.)
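A sketch of this join step with hypothetical helper names (the (antecedent, consequent) rule representation is illustrative): two rules over the same itemset whose consequents share all but their last item are merged.

def join(rule1, rule2):
    """Merge two rules over the same itemset whose consequents share
    their first len-1 items, e.g. join(CD->AB, BD->AC) = D->ABC."""
    ante1, cons1 = rule1
    ante2, cons2 = rule2
    if sorted(cons1)[:-1] != sorted(cons2)[:-1]:
        return None                      # consequent prefixes differ
    new_cons = frozenset(cons1) | frozenset(cons2)
    new_ante = (frozenset(ante1) | frozenset(cons1)) - new_cons
    return (new_ante, new_cons) if new_ante else None

r = join(({"C", "D"}, {"A", "B"}), ({"B", "D"}, {"A", "C"}))
assert r == (frozenset({"D"}), frozenset({"A", "B", "C"}))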

