1 Association Rules: Market Baskets, Frequent Itemsets, A-Priori Algorithm

2 The Market-Basket Model
- A large set of items, e.g., things sold in a supermarket.
- A large set of baskets, each of which is a small set of the items, e.g., the things one customer buys on one day.

3 Market-Baskets – (2)
- Really a general many-many mapping (association) between two kinds of things.
  - But we ask about connections among "items," not "baskets."
- The technology focuses on common events, not rare events.

4 Support
- Simplest question: find sets of items that appear "frequently" in the baskets.
- Support for itemset I = the number of baskets containing all items in I.
  - Sometimes given as a percentage.
- Given a support threshold s, sets of items that appear in at least s baskets are called frequent itemsets.

5 Example: Frequent Itemsets
- Items = {milk, coke, pepsi, beer, juice}.
- Support threshold = 3 baskets.
  B1 = {m, c, b}    B2 = {m, p, j}
  B3 = {m, b}       B4 = {c, j}
  B5 = {m, p, b}    B6 = {m, c, b, j}
  B7 = {c, b, j}    B8 = {b, c}
- Frequent itemsets: {m}, {c}, {b}, {j}, {m,b}, {b,c}, {c,j}.
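
To make the definition concrete, here is a minimal Python sketch (not from the slides; names are illustrative) that counts the support of a few candidate itemsets over the eight example baskets:

    baskets = [
        {"m", "c", "b"}, {"m", "p", "j"}, {"m", "b"}, {"c", "j"},
        {"m", "p", "b"}, {"m", "c", "b", "j"}, {"c", "b", "j"}, {"b", "c"},
    ]

    def support(itemset, baskets):
        """Number of baskets containing every item in the itemset."""
        return sum(1 for b in baskets if set(itemset) <= b)

    for candidate in [{"m"}, {"b", "c"}, {"m", "b"}, {"m", "c", "b"}]:
        print(sorted(candidate), support(candidate, baskets))
    # With threshold 3: {m}, {b,c}, and {m,b} are frequent; {m,c,b} (support 2) is not.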

6 Applications – (1)
- Items = products; baskets = sets of products someone bought in one trip to the store.
- Example application: given that many people buy beer and diapers together:
  - Run a sale on diapers; raise the price of beer.
- Only useful if many people buy diapers and beer.

7 Applications – (2)
- Baskets = sentences; items = documents containing those sentences.
- Items that appear together too often could represent plagiarism.
- Notice that items do not have to be "in" baskets.

8 Applications – (3)
- Baskets = Web pages; items = words.
- Unusual words appearing together in a large number of documents, e.g., "Brad" and "Angelina," may indicate an interesting relationship.

9 Aside: Words on the Web
- Many Web-mining applications involve words.
  1. Cluster pages by their topic, e.g., sports.
  2. Find useful blogs, versus nonsense.
  3. Determine the sentiment (positive or negative) of comments.
  4. Partition pages retrieved from an ambiguous query, e.g., "jaguar."

10 Words – (2)
- Very common words are stop words.
  - They rarely help determine meaning, and they block interesting events from view, so ignore them.
- The TF/IDF measure distinguishes "important" words from those that are usually not meaningful.

11 Words – (3)
- TF/IDF = "term frequency, inverse document frequency": relates the number of times a word appears to the number of documents in which it appears.
  - Low values go to words like "also" that appear essentially at random.
  - High values go to words like "computer" that are likely to be the topic of the documents in which they appear.
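
The slide gives no formula, but one common formulation (an assumption here, not stated on the slide) multiplies a term's normalized in-document frequency by the log of its inverse document frequency:

    import math

    def tf_idf(term_count, max_term_count_in_doc, num_docs, docs_with_term):
        """TF = count / count of the most frequent term in the document;
        IDF = log2(N / n_i), with N documents in total and n_i containing the term."""
        tf = term_count / max_term_count_in_doc
        idf = math.log2(num_docs / docs_with_term)
        return tf * idf

    # A stop word like "also" appears in nearly every document, so IDF is near zero:
    print(tf_idf(5, 10, 1_000_000, 990_000))   # ~0.007
    # A topical word like "computer" appears in few documents, so the score is high:
    print(tf_idf(5, 10, 1_000_000, 1_000))     # ~4.98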

12 Scale of the Problem
- Wal-Mart sells 100,000 items and can store billions of baskets.
- The Web has billions of words and many billions of pages.

13 Association Rules
- If-then rules about the contents of baskets.
- {i1, i2, …, ik} → j means: "if a basket contains all of i1, …, ik, then it is likely to contain j."
- The confidence of this association rule is the probability of j given i1, …, ik.

14 Example: Confidence
  B1 = {m, c, b}    B2 = {m, p, j}
  B3 = {m, b}       B4 = {c, j}
  B5 = {m, p, b}    B6 = {m, c, b, j}
  B7 = {c, b, j}    B8 = {b, c}
- An association rule: {m, b} → c.
  - Confidence = 2/4 = 50%: {m, b} appears in B1, B3, B5, and B6, and c appears in two of those baskets.
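
A short sketch (again illustrative, not from the slides) that computes this confidence directly from the baskets:

    baskets = [set(t) for t in ["mcb", "mpj", "mb", "cj", "mpb", "mcbj", "cbj", "bc"]]

    def confidence(lhs, rhs_item, baskets):
        """P(rhs_item | lhs) = support(lhs plus rhs_item) / support(lhs)."""
        lhs = set(lhs)
        sup_lhs = sum(1 for b in baskets if lhs <= b)
        sup_both = sum(1 for b in baskets if lhs | {rhs_item} <= b)
        return sup_both / sup_lhs

    print(confidence({"m", "b"}, "c", baskets))   # 2/4 = 0.5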

15 Finding Association Rules
- Question: "find all association rules with support ≥ s and confidence ≥ c."
  - Note: the "support" of an association rule is the support of the set of items on the left.
- Hard part: finding the frequent itemsets.
  - Note: if {i1, i2, …, ik} → j has high support and confidence, then both {i1, i2, …, ik} and {i1, i2, …, ik, j} will be "frequent."

16 Computation Model
- Typically, data is kept in flat files rather than in a database system.
  - Stored on disk.
  - Stored basket-by-basket.
  - Expand baskets into pairs, triples, etc. as you read them. Use k nested loops to generate all sets of size k.

17 File Organization
[Diagram: the file is a sequence of items laid out basket after basket (Basket 1, Basket 2, Basket 3, etc.).]
- Example: items are positive integers, and boundaries between baskets are –1.

18 Computation Model – (2)
- The true cost of mining disk-resident data is usually the number of disk I/O's.
- In practice, association-rule algorithms read the data in passes – all baskets are read in turn.
- Thus, we measure the cost by the number of passes an algorithm takes.

19 Main-Memory Bottleneck
- For many frequent-itemset algorithms, main memory is the critical resource.
  - As we read baskets, we need to count something, e.g., occurrences of pairs.
  - The number of different things we can count is limited by main memory.
  - Swapping counts in and out is a disaster (why?).

20 Finding Frequent Pairs
- The hardest problem often turns out to be finding the frequent pairs.
  - Why? Often frequent pairs are common, while frequent triples are rare.
    - Why? The probability of being frequent drops exponentially with size; the number of sets grows more slowly with size.
- We'll concentrate on pairs, then extend to larger sets.

21 Naïve Algorithm
- Read the file once, counting in main memory the occurrences of each pair.
  - From each basket of n items, generate its n(n–1)/2 pairs by two nested loops.
- Fails if (#items)² exceeds main memory.
  - Remember: #items can be 100K (Wal-Mart) or 10B (Web pages).
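
A sketch of this naive one-pass count in Python (illustrative; the baskets are the small example from earlier, but the same loop applies to any basket file):

    from collections import defaultdict
    from itertools import combinations

    def count_pairs(baskets):
        counts = defaultdict(int)                 # one counter per pair actually seen
        for basket in baskets:
            for pair in combinations(sorted(basket), 2):   # the n(n-1)/2 pairs of this basket
                counts[pair] += 1
        return counts

    baskets = [set(t) for t in ["mcb", "mpj", "mb", "cj", "mpb", "mcbj", "cbj", "bc"]]
    print(count_pairs(baskets)[("b", "m")])       # {m, b} occurs in 4 baskets
    # The approach breaks down when the number of distinct pairs no longer fits in memory.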

22 Example: Counting Pairs
- Suppose 10^5 items.
- Suppose counts are 4-byte integers.
- Number of pairs of items: 10^5(10^5 – 1)/2 ≈ 5×10^9.
- Therefore, about 2×10^10 bytes (20 gigabytes) of main memory are needed.

23 Details of Main-Memory Counting
- Two approaches:
  1. Count all pairs, using a triangular matrix.
  2. Keep a table of triples [i, j, c] = "the count of the pair of items {i, j} is c."
- (1) requires only 4 bytes per pair.
  - Note: we always assume integers are 4 bytes.
- (2) requires 12 bytes per pair, but only for those pairs with count > 0.

24 [Diagram comparing memory use: Method (1) takes 4 bytes per pair; Method (2) takes 12 bytes per occurring pair.]

25 Triangular-Matrix Approach – (1)
- Number items 1, 2, …, n.
  - Requires a table of size O(n) to convert item names to consecutive integers.
- Count {i, j} only if i < j.
- Keep pairs in the order {1,2}, {1,3}, …, {1,n}, {2,3}, {2,4}, …, {2,n}, {3,4}, …, {3,n}, …, {n–1,n}.

26 Triangular-Matrix Approach – (2)
- Find pair {i, j} at position (i – 1)(n – i/2) + j – i.
- Total number of pairs: n(n – 1)/2; total bytes about 2n².
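
A small sketch of the indexing (assuming 1-based item numbers with i < j, as on the slide):

    def pair_index(i, j, n):
        """0-based position of pair {i, j}, 1 <= i < j <= n, in the flat count array."""
        assert 1 <= i < j <= n
        # the slide's formula counts from 1, so subtract 1 for a 0-based array
        return int((i - 1) * (n - i / 2) + j - i) - 1

    n = 5
    counts = [0] * (n * (n - 1) // 2)    # n(n-1)/2 counts, about 2n^2 bytes at 4 bytes each
    counts[pair_index(2, 4, n)] += 1
    assert pair_index(1, 2, n) == 0 and pair_index(4, 5, n) == n * (n - 1) // 2 - 1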

27 Details of Approach #2
- Total bytes used is about 12p, where p is the number of pairs that actually occur.
  - Beats the triangular matrix if at most 1/3 of the possible pairs actually occur.
- May require extra space for a retrieval structure, e.g., a hash table.

28 A-Priori Algorithm for Pairs
- A two-pass approach called A-Priori limits the need for main memory.
- Key idea: monotonicity – if a set of items appears at least s times, so does every subset.
  - Contrapositive for pairs: if item i does not appear in s baskets, then no pair including i can appear in s baskets.

29 A-Priori Algorithm – (2)
- Pass 1: Read baskets and count in main memory the occurrences of each item.
  - Requires only memory proportional to the number of items.
- Items that appear at least s times are the frequent items.

30 A-Priori Algorithm – (3)
- Pass 2: Read baskets again and count in main memory only those pairs both of whose items were found to be frequent in Pass 1.
  - Requires memory proportional to the square of the number of frequent items (for counts), plus a list of the frequent items (so you know what must be counted).

31 Picture of A-Priori
[Diagram: in Pass 1, main memory holds the item counts; in Pass 2, it holds the list of frequent items plus the counts of pairs of frequent items.]
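
A compact sketch of these two passes for pairs (illustrative names; a real implementation would stream baskets from disk rather than hold them in a list):

    from collections import defaultdict
    from itertools import combinations

    def apriori_pairs(baskets, s):
        # Pass 1: count occurrences of each item.
        item_counts = defaultdict(int)
        for basket in baskets:
            for item in basket:
                item_counts[item] += 1
        frequent_items = {i for i, c in item_counts.items() if c >= s}

        # Pass 2: count only pairs whose items are both frequent.
        pair_counts = defaultdict(int)
        for basket in baskets:
            for pair in combinations(sorted(basket & frequent_items), 2):
                pair_counts[pair] += 1
        return {p: c for p, c in pair_counts.items() if c >= s}

    baskets = [set(t) for t in ["mcb", "mpj", "mb", "cj", "mpb", "mcbj", "cbj", "bc"]]
    print(sorted(apriori_pairs(baskets, 3).items()))
    # [(('b', 'c'), 4), (('b', 'm'), 4), (('c', 'j'), 3)]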

32 Detail for A-Priori
- You can use the triangular-matrix method with n = number of frequent items.
  - May save space compared with storing triples.
- Trick: number the frequent items 1, 2, … and keep a table relating the new numbers to the original item numbers.

33 A-Priori Using Triangular Matrix for Counts
[Diagram: in Pass 1, main memory holds the item counts; in Pass 2, it holds a table mapping frequent-item numbers 1, 2, … to the old item numbers, plus the triangular matrix of counts of pairs of frequent items.]

34 Frequent Triples, Etc.
- For each k, we construct two sets of k-sets (sets of size k):
  - Ck = candidate k-sets = those that might be frequent sets (support ≥ s), based on information from the pass for k–1.
  - Lk = the set of truly frequent k-sets.

35 [Diagram of the A-Priori pipeline: C1 (all items) is filtered by counting the items into L1 (frequent items); all pairs of items from L1 are constructed into C2, which is filtered by counting the pairs into L2 (frequent pairs); the construction of C3 is to be explained. The first pass produces L1; the second pass produces L2.]

36 A-Priori for All Frequent Itemsets
- One pass for each k.
- Needs room in main memory to count each candidate k-set.
- For typical market-basket data and reasonable support (e.g., 1%), k = 2 requires the most memory.

37 Frequent Itemsets – (2)
- C1 = all items.
- In general, Lk = members of Ck with support ≥ s.
- Ck+1 = (k+1)-sets, each of whose k-subsets is in Lk.

Apriori Algorithm – General (Agrawal & Srikant, 1994)
Database D (min_sup = 2):
  TID | Items
  10  | a, c, d
  20  | b, c, e
  30  | a, b, c, e
  40  | b, e
- Scan D for the 1-candidates C1: a:2, b:3, c:3, d:1, e:3.
- Frequent 1-itemsets L1: a:2, b:3, c:3, e:3.
- 2-candidates C2: ab, ac, ae, bc, be, ce.
- Scan D to count C2: ab:1, ac:2, ae:1, bc:2, be:3, ce:2.
- Frequent 2-itemsets L2: ac:2, bc:2, be:3, ce:2.
- 3-candidates C3: bce.
- Scan D to count C3; frequent 3-itemsets L3: bce:2.

Important Details of Apriori
- How to generate candidates?
  - Step 1: self-joining Lk.
  - Step 2: pruning.
- How to count the supports of candidates?

How to Generate Candidates?
- Suppose the items in Lk-1 are listed in an order.
- Step 1: self-join Lk-1
    insert into Ck
    select p.item1, p.item2, …, p.itemk-1, q.itemk-1
    from Lk-1 p, Lk-1 q
    where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1
- Step 2: pruning
    for each itemset c in Ck do
      for each (k-1)-subset s of c do
        if (s is not in Lk-1) then delete c from Ck

Example of Candidate Generation
- L3 = {abc, abd, acd, ace, bcd}
- Self-joining: L3 * L3
  - abcd from abc and abd
  - acde from acd and ace
- Pruning:
  - acde is removed because ade is not in L3
- C4 = {abcd}
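
A Python sketch of the same self-join-and-prune step, reproducing this example (itemsets represented as sorted tuples; names are illustrative):

    from itertools import combinations

    def gen_candidates(L_prev, k):
        """C_k from L_{k-1}: join (k-1)-sets sharing their first k-2 items, then prune."""
        prev = sorted(tuple(sorted(x)) for x in L_prev)
        prev_set = set(prev)
        candidates = set()
        for p in prev:
            for q in prev:
                if p[:-1] == q[:-1] and p[-1] < q[-1]:            # self-join step
                    c = p + (q[-1],)
                    # pruning: every (k-1)-subset of c must be in L_{k-1}
                    if all(sub in prev_set for sub in combinations(c, k - 1)):
                        candidates.add(c)
        return candidates

    L3 = ["abc", "abd", "acd", "ace", "bcd"]
    print(gen_candidates(L3, 4))   # {('a', 'b', 'c', 'd')}; acde is pruned since ade is not in L3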

How to Count Supports of Candidates?
- Why is counting the supports of candidates a problem?
  - The total number of candidates can be huge.
  - One transaction may contain many candidates.
- Method:
  - Candidate itemsets are stored in a hash tree.
  - A leaf node of the hash tree contains a list of itemsets and counts.
  - An interior node contains a hash table.
  - Subset function: finds all the candidates contained in a transaction.

Apriori: Candidate Generation-and-Test
- Any subset of a frequent itemset must also be frequent – an anti-monotone property.
  - A transaction containing {beer, diaper, nuts} also contains {beer, diaper}.
  - {beer, diaper, nuts} is frequent ⇒ {beer, diaper} must also be frequent.
- No superset of any infrequent itemset should be generated or tested.
  - Many item combinations can be pruned.

The Apriori Algorithm
- Ck: candidate itemsets of size k
- Lk: frequent itemsets of size k
- Pseudocode:
    L1 = {frequent items};
    for (k = 1; Lk ≠ ∅; k++) do
      Ck+1 = candidates generated from Lk;
      for each transaction t in the database do
        increment the count of all candidates in Ck+1 that are contained in t;
      Lk+1 = candidates in Ck+1 with min_support;
    return ∪k Lk;
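
The pseudocode translates almost line for line into the following runnable sketch (illustrative; it reuses the self-join-and-prune candidate generation sketched above and reproduces the worked example on database D with min_sup = 2):

    from collections import defaultdict
    from itertools import combinations

    def apriori(transactions, min_sup):
        transactions = [frozenset(t) for t in transactions]
        # L1 = {frequent items}
        counts = defaultdict(int)
        for t in transactions:
            for item in t:
                counts[(item,)] += 1
        L = {c: v for c, v in counts.items() if v >= min_sup}
        all_frequent = dict(L)
        k = 1
        while L:
            # C_{k+1}: candidates generated from L_k by self-join and pruning
            prev = sorted(L)
            candidates = set()
            for p in prev:
                for q in prev:
                    if p[:-1] == q[:-1] and p[-1] < q[-1]:
                        c = p + (q[-1],)
                        if all(sub in L for sub in combinations(c, k)):
                            candidates.add(c)
            # count all candidates contained in each transaction
            counts = defaultdict(int)
            for t in transactions:
                for c in candidates:
                    if set(c) <= t:
                        counts[c] += 1
            # L_{k+1} = candidates with at least min_sup occurrences
            L = {c: v for c, v in counts.items() if v >= min_sup}
            all_frequent.update(L)
            k += 1
        return all_frequent

    D = [{"a", "c", "d"}, {"b", "c", "e"}, {"a", "b", "c", "e"}, {"b", "e"}]
    print(sorted(apriori(D, 2)))
    # frequent itemsets a, b, c, e, ac, bc, be, ce, bce: matching the worked example above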

Challenges of FPM
- Challenges:
  - Multiple scans of the transaction database.
  - Huge number of candidates.
  - Tedious workload of support counting for candidates.
- Improving Apriori: general ideas
  - Reduce the number of transaction-database scans.
  - Shrink the number of candidates.
  - Facilitate support counting of candidates.

DIC: Reduce Scans
[Diagram: itemset lattice over {A, B, C, D}, from {} up to ABCD.]
- Once both A and D are determined frequent, the counting of AD can begin.
- Once all length-2 subsets of BCD are determined frequent, the counting of BCD can begin.
[Diagram comparing when Apriori and DIC begin counting 1-itemsets, 2-itemsets, and 3-itemsets as transactions are read.]
- S. Brin, R. Motwani, J. Ullman, and S. Tsur, 1997.

DHP: Reduce the Number of Candidates
- A hash bucket whose count is below min_sup ⇒ every candidate pair hashed to that bucket is infrequent.
  - Candidates: a, b, c, d, e.
  - Hash entries: {ab, ad, ae}, {bd, be, de}, …
  - Large (frequent) 1-itemsets: a, b, d, e.
  - The sum of the counts of {ab, ad, ae} < min_sup ⇒ ab should not be a candidate 2-itemset.
- J. Park, M. Chen, and P. Yu, 1995.
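
A sketch of the DHP filter for candidate pairs (the bucket count and hash function here are arbitrary choices for illustration, not part of the original algorithm description):

    from collections import defaultdict
    from itertools import combinations

    def dhp_candidate_pairs(baskets, min_sup, num_buckets=7):
        def bucket(pair):
            # toy hash for single-character items, chosen only for illustration
            return (ord(pair[0]) * 10 + ord(pair[1])) % num_buckets

        item_counts = defaultdict(int)
        bucket_counts = [0] * num_buckets
        # Pass 1: count items AND hash every pair of every basket into a bucket.
        for basket in baskets:
            for item in basket:
                item_counts[item] += 1
            for pair in combinations(sorted(basket), 2):
                bucket_counts[bucket(pair)] += 1
        frequent = sorted(i for i, c in item_counts.items() if c >= min_sup)
        # A pair survives as a candidate only if both items are frequent
        # AND the bucket it hashes to reached min_sup.
        return [p for p in combinations(frequent, 2)
                if bucket_counts[bucket(p)] >= min_sup]

    baskets = [set(t) for t in ["acd", "bce", "abce", "be"]]
    print(dhp_candidate_pairs(baskets, 2))
    # [('a', 'c'), ('b', 'c'), ('b', 'e'), ('c', 'e')]: ab and ae are pruned,
    # which the plain "all pairs of frequent items" C2 would have kept.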

Partition: Scan Database Only Twice
- Partition the database into n partitions.
- Itemset X is frequent ⇒ X is frequent in at least one partition.
  - Scan 1: partition the database and find local frequent patterns.
  - Scan 2: consolidate global frequent patterns.
- A. Savasere, E. Omiecinski, and S. Navathe, 1995.
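
A sketch of the two scans (the per-partition miner is brute force here purely for brevity; any frequent-itemset algorithm could play that role):

    from collections import defaultdict
    from itertools import combinations

    def mine_local(transactions, min_sup):
        """Brute force: count every nonempty subset of every transaction."""
        counts = defaultdict(int)
        for t in transactions:
            for k in range(1, len(t) + 1):
                for sub in combinations(sorted(t), k):
                    counts[sub] += 1
        return {s for s, c in counts.items() if c >= min_sup}

    def partition_mine(transactions, min_sup, num_parts=2):
        parts = [transactions[i::num_parts] for i in range(num_parts)]
        # Scan 1: any globally frequent itemset must be locally frequent somewhere,
        # so the union of the local results is a complete candidate set.
        candidates = set()
        for part in parts:
            local_sup = max(1, min_sup * len(part) // len(transactions))
            candidates |= mine_local(part, local_sup)
        # Scan 2: count the candidates over the whole database.
        counts = {c: sum(1 for t in transactions if set(c) <= t) for c in candidates}
        return {c: v for c, v in counts.items() if v >= min_sup}

    D = [{"a", "c", "d"}, {"b", "c", "e"}, {"a", "b", "c", "e"}, {"b", "e"}]
    print(sorted(partition_mine(D, 2)))   # same frequent itemsets as the Apriori trace above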

Sampling for Frequent Patterns
- Select a sample of the original database; mine frequent patterns within the sample using Apriori.
- Scan the database once to verify the frequent itemsets found in the sample; only borders of the closure of frequent patterns are checked.
  - Example: check abcd instead of ab, ac, …, etc.
- Scan the database again to find missed frequent patterns.
- H. Toivonen, 1996.

Bottleneck of Frequent-pattern Mining
- Multiple database scans are costly.
- Mining long patterns needs many passes of scanning and generates lots of candidates.
  - To find the frequent itemset i1i2…i100:
    - Number of scans: 100.
    - Number of candidates: all nonempty subsets, 2^100 – 1 ≈ 1.27×10^30.
  - Bottleneck: candidate generation and test.
- Can we avoid candidate generation?

Set Enumeration Tree
- Subsets of I can be enumerated systematically.
- I = {a, b, c, d}:
[Diagram: set-enumeration tree with levels {}; a, b, c, d; ab, ac, ad, bc, bd, cd; abc, abd, acd, bcd; abcd.]
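
The enumeration can be written as a tiny recursion: each set is extended only with items that follow its last item, so every subset appears exactly once (a sketch, not slide material):

    def enumerate_sets(items, prefix=()):
        """Depth-first walk of the set-enumeration tree over `items`."""
        for i, x in enumerate(items):
            node = prefix + (x,)
            yield node
            yield from enumerate_sets(items[i + 1:], node)

    print(["".join(s) for s in enumerate_sets(["a", "b", "c", "d"])])
    # ['a', 'ab', 'abc', 'abcd', 'abd', 'ac', 'acd', 'ad', 'b', 'bc', 'bcd', 'bd', 'c', 'cd', 'd']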

Borders of Frequent Itemsets
- Connected: if X and Y are frequent and X is an ancestor of Y, then all patterns between X and Y are frequent.
[Diagram: the set-enumeration tree over {a, b, c, d}.]

Projected Databases
- To find a child Xy of X, only the X-projected database is needed.
  - The sub-database of transactions containing X.
  - Item y is frequent in the X-projected database.
[Diagram: the set-enumeration tree over {a, b, c, d}.]

Tree-Projection Method
- Find frequent 2-itemsets.
- For each frequent 2-itemset xy, form a projected database.
  - The sub-database of transactions containing xy.
- Recursive mining:
  - If x'y' is frequent in the xy-projected database, then xyx'y' is a frequent pattern.

Borders and Max-patterns
- Max-patterns: borders of frequent patterns.
  - Any subset of a max-pattern is frequent.
  - Any superset of a max-pattern is infrequent.
[Diagram: the set-enumeration tree over {a, b, c, d}.]

MaxMiner: Mining Max-patterns
  Tid | Items            (min_sup = 2)
  10  | A, B, C, D, E
  20  | B, C, D, E
  30  | A, C, D, F
- 1st scan: find the frequent items: A, B, C, D, E.
- 2nd scan: find support for the following, where the long sets are the potential max-patterns:
  - AB, AC, AD, AE, ABCDE
  - BC, BD, BE, BCDE
  - CD, CE, CDE, DE
- Since BCDE is a max-pattern, there is no need to check BCD, BDE, CDE in a later scan.
- Bayardo, 1998.

Frequent Closed Patterns
- For a frequent itemset X, if there exists no item y such that every transaction containing X also contains y, then X is a frequent closed pattern.
  TID | Items            (min_sup = 2)
  10  | a, c, d, e, f
  20  | a, b, e
  30  | c, e, f
  40  | a, c, d, f
  50  | c, e, f
  - "acdf" is a frequent closed pattern.
- A concise representation of frequent patterns.
- Reduces the number of patterns and rules.
- N. Pasquier et al., ICDT '99.
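
The definition can be checked by brute force on this small database (a sketch; support() and is_closed() are illustrative helpers, not part of any named algorithm):

    transactions = [set("acdef"), set("abe"), set("cef"), set("acdf"), set("cef")]

    def support(itemset):
        return sum(1 for t in transactions if set(itemset) <= t)

    def is_closed(itemset):
        """X is closed iff no proper superset of X has the same support."""
        other_items = set().union(*transactions) - set(itemset)
        return all(support(set(itemset) | {y}) < support(itemset) for y in other_items)

    print(support("acdf"), is_closed("acdf"))   # 2 True  -> a frequent closed pattern
    print(support("acd"), is_closed("acd"))     # 2 False -> every transaction with acd also has f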

CLOSET: Mining Frequent Closed Patterns
- Flist: list of all frequent items in ascending order of support.
  - Flist: d-a-f-e-c
  TID | Items            (min_sup = 2)
  10  | a, c, d, e, f
  20  | a, b, e
  30  | c, e, f
  40  | a, c, d, f
  50  | c, e, f
- Divide the search space:
  - Patterns having d.
  - Patterns having a but no d, etc.
- Find frequent closed patterns recursively.
  - Every transaction having d also has c, f, a ⇒ cfad is a frequent closed pattern.
- PHM '00.

Closed and Max-patterns
- Closed-pattern mining algorithms can be adapted to mine max-patterns.
  - A max-pattern must be closed.
- Depth-first search methods have advantages over breadth-first search methods.

Multiple-level Association Rules
- Items often form a hierarchy.
- Flexible support settings: items at the lower level are expected to have lower support.
- The transaction database can be encoded based on dimensions and levels.
- Explore shared multi-level mining.
Example hierarchy: Milk [support = 10%], with children 2% Milk [support = 6%] and Skim Milk [support = 4%].
  - Uniform support: level-1 min_sup = 5%, level-2 min_sup = 5%.
  - Reduced support: level-1 min_sup = 5%, level-2 min_sup = 3%.

Multi-dimensional Association Rules
- Single-dimensional rules: buys(X, "milk") ⇒ buys(X, "bread")
- Multi-dimensional rules: ≥ 2 dimensions or predicates.
  - Inter-dimension association rules (no repeated predicates):
    age(X, "19-25") ∧ occupation(X, "student") ⇒ buys(X, "coke")
  - Hybrid-dimension association rules (repeated predicates):
    age(X, "19-25") ∧ buys(X, "popcorn") ⇒ buys(X, "coke")
- Categorical attributes: finite number of possible values, no order among values.
- Quantitative attributes: numeric, with an implicit order.

Quantitative/Weighted Association Rules
- Example: age(X, "33-34") ∧ income(X, "30K-50K") ⇒ buys(X, "high-resolution TV")
- Numeric attributes are dynamically discretized to maximize the confidence or compactness of the rules.
- 2-D quantitative association rules: Aquan1 ∧ Aquan2 ⇒ Acat
- Cluster "adjacent" association rules to form general rules using a 2-D grid.
[Diagram: 2-D grid with Age on one axis and Income bands (<20k, 20-30k, 30-40k, 40-50k, 50-60k, 60-70k) on the other.]