Association Rules & Sequential Patterns (CS583, Bing Liu, UIC)

Road map
- Basic concepts of association rules
- Apriori algorithm
- Sequential pattern mining
- Summary

Association rule mining
- Proposed by Agrawal et al. in 1993. It is an important data mining model studied extensively by the database and data mining community.
- Assume all data are categorical. There is no good algorithm for numeric data.
- Initially used for market basket analysis to find how items purchased by customers are related.
- Example: Bread → Milk [sup = 5%, conf = 100%]

The model: data
- I = {i1, i2, …, im}: a set of items.
- Transaction t: a set of items, t ⊆ I.
- Transaction database T: a set of transactions T = {t1, t2, …, tn}.

Transaction data: supermarket data
Market basket transactions:
  t1: {bread, cheese, milk}
  t2: {apple, eggs, salt, yogurt}
  …
  tn: {biscuit, eggs, milk}
Concepts:
- An item: an item/article in a basket
- I: the set of all items sold in the store
- A transaction: items purchased in a basket; it may have a TID (transaction ID)
- A transactional dataset: a set of transactions

Transaction data: a set of documents
A text document data set. Each document is treated as a “bag” of keywords:
  doc1: Student, Teach, School
  doc2: Student, School
  doc3: Teach, School, City, Game
  doc4: Baseball, Basketball
  doc5: Basketball, Player, Spectator
  doc6: Baseball, Coach, Game, Team
  doc7: Basketball, Team, City, Game

The model: rules
- A transaction t contains X, a set of items (itemset) in I, if X ⊆ t.
- An association rule is an implication of the form X → Y, where X, Y ⊂ I and X ∩ Y = ∅.
- An itemset is a set of items. E.g., X = {milk, bread, cereal} is an itemset.
- A k-itemset is an itemset with k items. E.g., {milk, bread, cereal} is a 3-itemset.

Rule strength measures
- Support: The rule holds with support sup in T (the transaction data set) if sup% of transactions contain X ∪ Y. sup = Pr(X ∪ Y).
- Confidence: The rule holds in T with confidence conf if conf% of transactions that contain X also contain Y. conf = Pr(Y | X).
- An association rule is a pattern that states that when X occurs, Y occurs with a certain probability.

Support and confidence
- Support count: The support count of an itemset X, denoted by X.count, in a data set T is the number of transactions in T that contain X. Assume T has n transactions. Then,
    support = (X ∪ Y).count / n
    confidence = (X ∪ Y).count / X.count
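To make the definitions concrete, here is a minimal Python sketch (not part of the original slides; the function names are mine) that computes support count, support, and confidence directly from a list of transactions represented as Python sets:

```python
def support_count(itemset, transactions):
    """X.count: the number of transactions containing every item of `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def support(X, Y, transactions):
    """sup(X -> Y) = (X u Y).count / n"""
    return support_count(X | Y, transactions) / len(transactions)

def confidence(X, Y, transactions):
    """conf(X -> Y) = (X u Y).count / X.count"""
    return support_count(X | Y, transactions) / support_count(X, transactions)
```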

Goal and key features
- Goal: Find all rules that satisfy the user-specified minimum support (minsup) and minimum confidence (minconf).
- Key features:
  - Completeness: find all rules.
  - No target item(s) on the right-hand side.
  - Mining with data on hard disk (not in memory).

An example
Transaction data:
  t1: Beef, Chicken, Milk
  t2: Beef, Cheese
  t3: Cheese, Boots
  t4: Beef, Chicken, Cheese
  t5: Beef, Chicken, Clothes, Cheese, Milk
  t6: Chicken, Clothes, Milk
  t7: Chicken, Milk, Clothes
Assume minsup = 30% and minconf = 80%.
An example frequent itemset: {Chicken, Clothes, Milk} [sup = 3/7]
Association rules from the itemset:
  Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]
  …
  Clothes, Chicken → Milk [sup = 3/7, conf = 3/3]
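As a quick check of the slide's numbers, a self-contained sketch (the data is copied from the slide; the helper function is mine):

```python
T = [
    {"Beef", "Chicken", "Milk"},
    {"Beef", "Cheese"},
    {"Cheese", "Boots"},
    {"Beef", "Chicken", "Cheese"},
    {"Beef", "Chicken", "Clothes", "Cheese", "Milk"},
    {"Chicken", "Clothes", "Milk"},
    {"Chicken", "Milk", "Clothes"},
]

def count(items):
    # number of transactions containing all of `items`
    return sum(1 for t in T if items <= t)

print(count({"Chicken", "Clothes", "Milk"}) / len(T))               # 3/7: the frequent itemset's support
print(count({"Clothes", "Milk", "Chicken"}) / count({"Clothes"}))   # 3/3: conf of Clothes -> Milk, Chicken
```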

Transaction data representation
- A simplistic view of shopping baskets.
- Some important information is not considered, e.g., the quantity of each item purchased and the price paid.

Many mining algorithms
- There are a large number of them! They use different strategies and data structures.
- Their resulting sets of rules are all the same: given a transaction data set T, a minimum support, and a minimum confidence, the set of association rules existing in T is uniquely determined.
- Any algorithm should find the same set of rules, although their computational efficiencies and memory requirements may differ.
- We study only one: the Apriori algorithm.

Road map
- Basic concepts of association rules
- Apriori algorithm
- Mining with multiple minimum supports
- Mining class association rules
- Sequential pattern mining
- Summary

The Apriori algorithm
- The best-known algorithm. Two steps:
  1. Find all itemsets that have minimum support (frequent itemsets, also called large itemsets).
  2. Use frequent itemsets to generate rules.
- E.g., a frequent itemset {Chicken, Clothes, Milk} [sup = 3/7] and one rule from the frequent itemset: Clothes → Milk, Chicken [sup = 3/7, conf = 3/3]

Step 1: Mining all frequent itemsets
- A frequent itemset is an itemset whose support is ≥ minsup.
- Key idea: the apriori property (downward closure property): any subset of a frequent itemset is also a frequent itemset.
(Figure: the itemset lattice over items A, B, C, D — singletons A, B, C, D; pairs AB, AC, AD, BC, BD, CD; triples ABC, ABD, ACD, BCD.)

The algorithm
- Iterative algorithm (also called level-wise search): find all 1-item frequent itemsets, then all 2-item frequent itemsets, and so on. In each iteration k, only consider itemsets that contain some frequent (k-1)-itemset.
- Find frequent itemsets of size 1: F1.
- From k = 2:
  - Ck = candidates of size k: those itemsets of size k that could be frequent, given Fk-1.
  - Fk = those itemsets in Ck that are actually frequent, Fk ⊆ Ck (needs one scan of the database).

Example – finding frequent itemsets
Dataset T (minsup = 0.5):
  TID   Items
  T100  1, 3, 4
  T200  2, 3, 5
  T300  1, 2, 3, 5
  T400  2, 5
(notation: itemset:count)
1. Scan T → C1: {1}:2, {2}:3, {3}:3, {4}:1, {5}:3
   → F1: {1}:2, {2}:3, {3}:3, {5}:3
   → C2: {1,2}, {1,3}, {1,5}, {2,3}, {2,5}, {3,5}
2. Scan T → C2: {1,2}:1, {1,3}:2, {1,5}:1, {2,3}:2, {2,5}:3, {3,5}:2
   → F2: {1,3}:2, {2,3}:2, {2,5}:3, {3,5}:2
   → C3: {2,3,5}
3. Scan T → C3: {2,3,5}:2 → F3: {2,3,5}

Details: ordering of items
- The items in I are sorted in lexicographic order (which is a total order).
- The order is used throughout the algorithm in each itemset.
- {w[1], w[2], …, w[k]} represents a k-itemset w consisting of items w[1], w[2], …, w[k], where w[1] < w[2] < … < w[k] according to the total order.

Details: the algorithm
Algorithm Apriori(T)
  C1 ← init-pass(T);
  F1 ← {f | f ∈ C1, f.count/n ≥ minsup};   // n: number of transactions in T
  for (k = 2; Fk-1 ≠ ∅; k++) do
    Ck ← candidate-gen(Fk-1);
    for each transaction t ∈ T do
      for each candidate c ∈ Ck do
        if c is contained in t then c.count++;
    end
    Fk ← {c ∈ Ck | c.count/n ≥ minsup};
  end
  return F ← ∪k Fk;
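A compact Python sketch of this level-wise loop (my own illustration, not the course's reference code; it inlines a simplified candidate step — the faithful join/prune candidate-gen is sketched after its example below):

```python
from itertools import combinations

def apriori(T, minsup):
    """Level-wise frequent itemset mining over a list of item sets T."""
    n = len(T)
    # init-pass: F1 = frequent single items
    items = {i for t in T for i in t}
    F = [{frozenset([i]) for i in items
          if sum(1 for t in T if i in t) / n >= minsup}]
    while F[-1]:
        k = len(F) + 1
        # candidate-gen (simplified): size-k unions of frequent (k-1)-itemsets...
        Ck = {a | b for a in F[-1] for b in F[-1] if len(a | b) == k}
        # ...pruned by downward closure: every (k-1)-subset must be frequent
        Ck = {c for c in Ck
              if all(frozenset(s) in F[-1] for s in combinations(c, k - 1))}
        # one scan of T to count candidates; keep the frequent ones as Fk
        F.append({c for c in Ck
                  if sum(1 for t in T if c <= t) / n >= minsup})
    return set().union(*F)

# The dataset from the worked example above:
T = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
print(apriori(T, 0.5))  # includes frozenset({2, 3, 5}), matching the slide
```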

Apriori candidate generation
- The candidate-gen function takes Fk-1 and returns a superset (called the candidates) of the set of all frequent k-itemsets. It has two steps:
  - Join step: generate all possible candidate itemsets Ck of length k.
  - Prune step: remove those candidates in Ck that cannot be frequent.

Candidate-gen function
Function candidate-gen(Fk-1)
  Ck ← ∅;
  forall f1, f2 ∈ Fk-1
      with f1 = {i1, …, ik-2, ik-1}
      and  f2 = {i1, …, ik-2, i'k-1}
      and  ik-1 < i'k-1 do
    c ← {i1, …, ik-1, i'k-1};          // join f1 and f2
    Ck ← Ck ∪ {c};
    for each (k-1)-subset s of c do
      if (s ∉ Fk-1) then delete c from Ck;   // prune
  end
  return Ck;

An example
- F3 = {{1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4}}
- After the join step: C4 = {{1, 2, 3, 4}, {1, 3, 4, 5}}
- After pruning: C4 = {{1, 2, 3, 4}}; {1, 3, 4, 5} is removed because its subset {1, 4, 5} is not in F3.
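The join and prune steps, replayed on this example, as a short Python sketch (illustrative only; the join condition follows the slide's lexicographic formulation):

```python
from itertools import combinations

def candidate_gen(F_prev, k):
    """candidate-gen: join frequent (k-1)-itemsets that share their first
    k-2 items, then prune candidates with an infrequent (k-1)-subset."""
    Ck = set()
    for f1 in F_prev:
        for f2 in F_prev:
            a, b = sorted(f1), sorted(f2)
            if a[:-1] == b[:-1] and a[-1] < b[-1]:   # join condition
                Ck.add(f1 | f2)
    return {c for c in Ck                            # prune step
            if all(frozenset(s) in F_prev for s in combinations(c, k - 1))}

F3 = {frozenset(s) for s in [(1,2,3), (1,2,4), (1,3,4), (1,3,5), (2,3,4)]}
print(candidate_gen(F3, 4))  # {frozenset({1, 2, 3, 4})}: {1,3,4,5} is pruned
```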

Step 2: Generating rules from frequent itemsets
- Frequent itemsets → association rules
- One more step is needed to generate association rules:
  For each frequent itemset X,
    for each proper nonempty subset A of X:
      let B = X − A;
      A → B is an association rule if confidence(A → B) ≥ minconf, where
        support(A → B) = support(A ∪ B) = support(X)
        confidence(A → B) = support(A ∪ B) / support(A)
(A small Python sketch follows the example on the next slide.)

Generating rules: an example
- Suppose {2, 3, 4} is frequent, with sup = 50%.
- Its proper nonempty subsets {2,3}, {2,4}, {3,4}, {2}, {3}, {4} have sup = 50%, 50%, 75%, 75%, 75%, 75% respectively.
- These generate the following association rules:
  2,3 → 4, confidence = 100%
  2,4 → 3, confidence = 100%
  3,4 → 2, confidence = 67%
  2 → 3,4, confidence = 67%
  3 → 2,4, confidence = 67%
  4 → 2,3, confidence = 67%
- All rules have support = 50%.
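A minimal Python sketch of the rule-generation step from the previous slide, replayed on this example (the names and encoding are mine; it assumes subset support counts were recorded during itemset generation, as the next slide points out):

```python
from itertools import combinations

def gen_rules(X, count, n, minconf):
    """All rules A -> (X - A) from frequent itemset X that meet minconf.
    count: dict mapping frozensets to support counts recorded during mining."""
    rules = []
    for r in range(1, len(X)):                    # proper nonempty subsets A
        for A in map(frozenset, combinations(X, r)):
            conf = count[X] / count[A]
            if conf >= minconf:
                rules.append((set(A), set(X - A), count[X] / n, conf))
    return rules

# Counts matching the sup percentages above (n = 4 transactions):
count = {frozenset(s): c for s, c in [
    ((2, 3, 4), 2),
    ((2, 3), 2), ((2, 4), 2), ((3, 4), 3),
    ((2,), 3), ((3,), 3), ((4,), 3)]}

for A, B, sup, conf in gen_rules(frozenset({2, 3, 4}), count, 4, 0.6):
    print(A, "->", B, f"[sup = {sup:.0%}, conf = {conf:.0%}]")
```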

Generating rules: summary
- To recap, to obtain A → B we need support(A ∪ B) and support(A).
- All the information required for the confidence computation has already been recorded during itemset generation, so there is no need to scan the data T again.
- This step is not as time-consuming as frequent itemset generation.

On the Apriori algorithm
- It seems very expensive: a level-wise search that, with K the size of the largest itemset, makes at most K passes over the data.
- In practice, K is bounded (often around 10), and the algorithm is very fast. Under some conditions, all rules can be found in linear time.
- It scales up to large data sets.

More on association rule mining
- Clearly the space of all association rules is exponential, O(2^m), where m is the number of items in I.
- The mining exploits sparseness of the data and high minimum support and minimum confidence values.
- Still, it always produces a huge number of rules: thousands, tens of thousands, millions, …

Road map
- Basic concepts of association rules
- Apriori algorithm
- Sequential pattern mining
- Summary

Sequential pattern mining
- Association rule mining does not consider the order of transactions, but in many applications such orderings are significant. E.g.:
  - In market basket analysis, it is interesting to know whether people buy some items in sequence, e.g., buying a bed first and then bed sheets some time later.
  - In Web usage mining, it is useful to find the navigational patterns of users in a Web site from their sequences of page visits.

Basic concepts
- Let I = {i1, i2, …, im} be a set of items.
- Sequence: an ordered list of itemsets.
- Itemset/element: a non-empty set of items X ⊆ I.
- We denote a sequence s by ⟨a1 a2 … ar⟩, where ai is an itemset, also called an element of s.
- An element (or itemset) of a sequence is denoted by {x1, x2, …, xk}, where xj ∈ I is an item.
- We assume without loss of generality that the items in an element of a sequence are in lexicographic order.

Basic concepts (contd)
- Size: the size of a sequence is the number of elements (or itemsets) in the sequence.
- Length: the length of a sequence is the number of items in the sequence. A sequence of length k is called a k-sequence.
- A sequence s1 = ⟨a1 a2 … ar⟩ is a subsequence of another sequence s2 = ⟨b1 b2 … bv⟩, or s2 is a supersequence of s1, if there exist integers 1 ≤ j1 < j2 < … < jr-1 < jr ≤ v such that a1 ⊆ bj1, a2 ⊆ bj2, …, ar ⊆ bjr. We also say that s2 contains s1.

An example
- Let I = {1, 2, 3, 4, 5, 6, 7, 8, 9}.
- The sequence ⟨{3}{4, 5}{8}⟩ is contained in (is a subsequence of) ⟨{6}{3, 7}{9}{4, 5, 8}{3, 8}⟩ because {3} ⊆ {3, 7}, {4, 5} ⊆ {4, 5, 8}, and {8} ⊆ {3, 8}.
- However, ⟨{3}{8}⟩ is not contained in ⟨{3, 8}⟩, or vice versa.
- The size of the sequence ⟨{3}{4, 5}{8}⟩ is 3, and its length is 4.

Objective
- Given a set S of input data sequences (or a sequence database), the problem of mining sequential patterns is to find all sequences that have a user-specified minimum support.
- Each such sequence is called a frequent sequence, or a sequential pattern.
- The support of a sequence is the fraction of total data sequences in S that contain this sequence.
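A small Python sketch of sequence containment and sequence support (my own encoding, not the course's: a sequence is a list of Python sets):

```python
def is_subsequence(s1, s2):
    """True if s1 = <a1...ar> is contained in s2 = <b1...bv>: greedily match
    each element ai to the earliest remaining element of s2 that contains it."""
    j = 0
    for element in s2:
        if j < len(s1) and s1[j] <= element:
            j += 1
    return j == len(s1)

def seq_support(s, S):
    """Fraction of data sequences in S that contain s."""
    return sum(1 for d in S if is_subsequence(s, d)) / len(S)

# The earlier example:
big = [{6}, {3, 7}, {9}, {4, 5, 8}, {3, 8}]
print(is_subsequence([{3}, {4, 5}, {8}], big))   # True
print(is_subsequence([{3}, {8}], [{3, 8}]))      # False: {3}{8} needs two elements
```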

Example

Example (contd)

GSP mining algorithm
- Very similar to the Apriori algorithm.

Candidate generation

An example

Summary
- Association rule mining has been extensively studied in the data mining community, and so has sequential pattern mining.
- There are many efficient algorithms and model variations.
- Other related work includes:
  - Multi-level or generalized rule mining
  - Constrained rule mining
  - Incremental rule mining
  - Maximal frequent itemset mining
  - Closed itemset mining
  - Rule interestingness and visualization
  - Parallel algorithms
  - …

Background on association rules
An association rule X → Y holds with minimum support and confidence:
- support, s = P(X ∪ Y): the probability that a transaction contains X ∪ Y
- confidence, c = P(Y|X): the conditional probability that a transaction having X also contains Y
Efficient algorithms:
- Apriori by Agrawal & Srikant, VLDB 1994
- FP-tree by Han, Pei & Yin, SIGMOD 2000
- etc.
(Figure: a Venn diagram of customers who buy X, customers who buy Y, and customers who buy both.)

Criticism of support and confidence
Example 1 (Aggarwal & Yu, PODS 1998): Among 5000 students,
- 3000 play basketball,
- 3750 eat cereal,
- 2000 both play basketball and eat cereal.
The rule "play basketball → eat cereal" [40%, 66.7%] is misleading, because the overall percentage of students eating cereal is 75%, which is higher than 66.7%.
The rule "play basketball → not eat cereal" [20%, 33.3%] is far more accurate, although it has lower support and confidence.

Criticism of support and confidence (contd)
- We need a measure of dependent or correlated events.
- P(Y|X)/P(Y), equivalently P(X ∪ Y)/(P(X)·P(Y)), is called the lift of the rule X → Y.
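Plugging the numbers from Example 1 into these formulas (a quick check, not from the slides):

```python
# Example 1: 5000 students, 3000 basketball, 3750 cereal, 2000 both
n, basketball, cereal, both = 5000, 3000, 3750, 2000
conf = both / basketball       # P(cereal | basketball) ~= 0.667
lift = conf / (cereal / n)     # 0.667 / 0.75 ~= 0.89
print(conf, lift)              # lift < 1: the two events are negatively correlated
```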

Criticism of lift
Suppose a triple ABC is unusually frequent because:
- Case 1: AB and/or AC and/or BC are unusually frequent, or
- Case 2: there is something special about the triple such that all three occur frequently together.
Example 2 (DuMouchel & Pregibon, KDD 2001): Suppose in a database of patient adverse drug reactions, A and B are two drugs and C is the occurrence of kidney failure.
- Case 1: A and B may act independently upon the kidney; many occurrences of ABC arise because A and B are sometimes prescribed together.
- Case 2: A and B may have no effect on the kidney if taken alone, but when taken together a drug interaction occurs that often leads to kidney failure.
- Case 3: A and B may have a small effect on the kidney if taken alone, but when taken together there is a strong effect.

Saturated log-linear model
For attributes A, B, C with expected cell counts m_ijk, the saturated log-linear model is

  log m_ijk = u                                  (main effect)
            + u_A(i) + u_B(j) + u_C(k)           (1-factor effects)
            + u_AB(ij) + u_AC(ik) + u_BC(jk)     (2-factor effects)
            + u_ABC(ijk)                         (3-factor effect)

where a 2-factor term such as u_AB(ij) shows the dependency within the joint distribution of A and B.

Interpreting associations
- Comparison with lift and EXCESS2.
- Derive association patterns by examining the u-terms. E.g., we can derive a positive interaction between A and C, a negative interaction between A and B, no significant interaction between B and C, and a positive three-factor interaction among A, B, and C.
(The slide also shows three fitted models: the independence model, the pairwise model, and the full fitted model.)

Other measures
(Slide shows a 2×2 contingency table for A and B, and a list of objective measures for A ⇒ B.)
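The slide's table of measures did not survive the transcript; for reference, here are the usual textbook definitions of two such objective measures, written in LaTeX (stated from standard sources, not recovered from the slide):

```latex
\[
\mathrm{lift}(A \Rightarrow B) \;=\; \frac{P(A \cup B)}{P(A)\,P(B)}, \qquad
\chi^2 \;=\; \sum_{i,j} \frac{(f_{ij} - e_{ij})^2}{e_{ij}},
\]
% f_{ij}: observed counts in the four cells of the 2x2 table
% (A vs. not-A by B vs. not-B); e_{ij}: counts expected under independence.
```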

Partial correlation
The correlation between two variables after the common effects of a third variable (or set of variables) are removed.
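The slide's formula is missing from the transcript; the standard first-order partial correlation it refers to is (in LaTeX):

```latex
\[
r_{XY \cdot Z} \;=\; \frac{r_{XY} \;-\; r_{XZ}\, r_{YZ}}
                          {\sqrt{(1 - r_{XZ}^{2})\,(1 - r_{YZ}^{2})}}
\]
% r_{XY}: the ordinary correlation of X and Y; Z is the third
% variable whose common effect is removed.
```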

Causal interaction learning
- Bayesian approaches (search and score), Friedman et al. 2000:
  - Apply heuristic search methods to construct a model and then evaluate it using some scoring measure (e.g., Bayesian scoring, entropy, MDL, etc.).
  - Averaging over the space of structures is computationally intractable, as the number of DAGs is super-exponential in the number of genes.
  - Sensitive to the choice of local model.
- Constraint-based conditional independence approaches, PC by Spirtes et al. 1993:
  - Instead of searching the space of models, it starts from the complete undirected graph, then thins this graph by removing edges with zero-order conditional independence relations, thins again with first-order conditional independence relations, and so forth.
  - Slow when dealing with a large number of variables.