
Slide 1: Mining Association Rules: Advanced Concepts and Algorithms. Lecture Notes for Chapter 7. By Gun Ho Lee (ghlee@ssu.ac.kr), Intelligent Information Systems Lab, Soongsil University, Korea. This material is modified and reproduced from books and materials by P.-N. Tan et al., J. Han and M. Kamber, M. Dunham, and others.

Slide 2: Sequence of Transactions

Slide 3: Why sequential pattern mining?

Slide 4: Sequence Database

Slide 5: Continuous and Categorical Attributes. Example of an association rule: {Number of Pages ∈ [5,10), (Browser = Mozilla)} → {Buy = No}, where [a,b) denotes an interval that includes a but not b. How can the association analysis formulation be applied to non-asymmetric binary variables?

Slide 6: Handling Categorical Attributes. Transform each categorical attribute into asymmetric binary variables (see Tables 7.3 and 7.4 on page 419). Introduce a new "item" for each distinct attribute-value pair. Example: replace the Browser Type attribute with two items, Browser Type = Internet Explorer and Browser Type = Mozilla.
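A minimal sketch of this binarization using pandas, assuming a hypothetical survey table with Browser and Buy columns (get_dummies creates one asymmetric binary item per attribute-value pair):

    import pandas as pd

    # Hypothetical Internet-survey transactions (one row per session)
    df = pd.DataFrame({
        "Browser": ["Mozilla", "IE", "Mozilla", "Opera"],
        "Buy":     ["No", "Yes", "No", "No"],
    })

    # One 0/1 item per (attribute, value) pair, e.g. "Browser=Mozilla"
    items = pd.get_dummies(df, prefix_sep="=")
    print(items)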

Slide 7: (figure)

Slide 8: After binarizing categorical and symmetric binary attributes

Slide 9: Internet survey data with continuous attributes

Slide 10: Internet survey data after binarizing categorical and continuous attributes

Slide 11: Handling Categorical Attributes — Potential Issues.
- What if an attribute has many possible values? Example: the attribute Country has more than 200 possible values, and many of them may have very low support. Potential solution: aggregate the low-support attribute values (a sketch follows below).
- What if the distribution of attribute values is highly skewed? Example: 95% of visitors have Buy = No, so most items end up associated with the (Buy = No) item. Potential solution: drop such highly frequent items.
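A minimal sketch of the aggregation idea, assuming a hypothetical Country column; the 15% support cutoff is illustrative, not from the slides:

    import pandas as pd

    df = pd.DataFrame({"Country": ["KR", "US", "US", "FJ", "KR", "MC", "US"]})

    # Relative support of each attribute value
    support = df["Country"].value_counts(normalize=True)

    # Collapse low-support values into a single aggregate item "Other"
    rare = support[support < 0.15].index
    df["Country"] = df["Country"].where(~df["Country"].isin(rare), "Other")
    print(df["Country"].value_counts())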

Slide 12: Handling Continuous Attributes. Different kinds of rules:
- Age ∈ [21,35) ∧ Salary ∈ [70k,120k) → Buy
- Salary ∈ [70k,120k) ∧ Buy → Age: μ=28, σ=4
Different methods: discretization-based, statistics-based, and non-discretization-based (e.g., minApriori).

Slide 13: Handling Continuous Attributes. Use discretization. Unsupervised methods: equal-width binning, equal-depth binning, clustering. Supervised methods: choose bin boundaries using the class labels. [Table: class counts (Anomalous vs. Normal) for attribute values v1–v9 of the "Yearly Bargain sale" attribute, grouped into bins 1–3.]
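A minimal sketch of the two unsupervised schemes using pandas, on hypothetical attribute values; the bin count of 3 is illustrative:

    import pandas as pd

    values = pd.Series([21, 23, 25, 28, 34, 41, 45, 52, 60, 63])

    # Equal-width binning: intervals of equal length
    print(pd.cut(values, bins=3))

    # Equal-depth (equal-frequency) binning: ~same number of values per bin
    print(pd.qcut(values, q=3))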

Slide 14: Discretization Issues. The size of the discretized intervals affects support and confidence: if the intervals are too small, a rule may not have enough support; if they are too large, it may not have enough confidence. Potential solution: use all possible intervals, e.g.:
{Refund = No, (Income = $51,250)} → {Cheat = No}
{Refund = No, (60K ≤ Income ≤ 80K)} → {Cheat = No}
{Refund = No, (0K ≤ Income ≤ 1B)} → {Cheat = No}

Slide 15: Discretization Issues. Execution time: if an interval contains n values, there are on average O(n^2) possible ranges. This also produces too many rules, e.g.:
{Refund = No, (Income = $51,250)} → {Cheat = No}
{Refund = No, (51K ≤ Income ≤ 52K)} → {Cheat = No}
{Refund = No, (50K ≤ Income ≤ 60K)} → {Cheat = No}

Slide 16: Approach by Srikant & Agrawal. Preprocess the data: discretize each attribute using equi-depth partitioning, use the partial completeness measure to determine the number of partitions, and merge adjacent intervals as long as their support is less than max-support. Then apply existing association rule mining algorithms, and finally determine the interesting rules in the output.

Slide 17: Handling Methods for Continuous Attributes. Different methods: discretization-based, statistics-based, and non-discretization-based (e.g., minApriori).

Slide 18: Statistics-based Methods. Example: Browser = Mozilla ∧ Buy = Yes → Age: μ=23. The rule consequent is a continuous variable characterized by its statistics (mean, median, standard deviation, etc.). Approach:
- Withhold the target variable from the rest of the data.
- Apply existing frequent itemset generation to the rest of the data.
- For each frequent itemset, compute the descriptive statistics of the corresponding target variable; the frequent itemset becomes a rule by introducing the target variable as its consequent.
- Apply a statistical test to determine whether the rule is interesting.

Slide 19: Statistics-based Methods. How do we determine whether an association rule is interesting? Compare the statistics of the population segment covered by the rule against the segment not covered: A → B: μ versus ¬A → B: μ'. Statistical hypothesis testing:
- Null hypothesis H0: μ' = μ + Δ
- Alternative hypothesis H1: μ' > μ + Δ
- Z = (μ' − μ − Δ) / sqrt(s1²/n1 + s2²/n2) has zero mean and unit variance under the null hypothesis, where n1 is the number of transactions supporting A, n2 the number of transactions not supporting A, s1 the standard deviation of the B value among transactions supporting A, and s2 the standard deviation among transactions not supporting A.

Slide 20: Statistics-based Methods. Example: r: Browser = Mozilla ∧ Buy = Yes → Age: μ=38. The rule is interesting if the difference between μ and μ' is greater than 5 years (i.e., Δ = 5). For r, suppose n1 = 50 and s1 = 3.5; for r' (the complement), n2 = 200 and s2 = 6.5. For a one-sided test at the 95% confidence level, the critical Z-value for rejecting the null hypothesis is 1.64. Since the computed Z is greater than 1.64, r is an interesting rule.
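A minimal sketch of this test, using the values from the slide; the slide does not give μ', so mu_prime = 45 below is an assumed value for illustration only:

    import math

    def z_statistic(mu, mu_prime, delta, n1, s1, n2, s2):
        # Z = (mu' - mu - delta) / sqrt(s1^2/n1 + s2^2/n2)
        return (mu_prime - mu - delta) / math.sqrt(s1**2 / n1 + s2**2 / n2)

    # n1, s1, n2, s2, mu, delta from the slide; mu_prime is assumed
    z = z_statistic(mu=38, mu_prime=45, delta=5, n1=50, s1=3.5, n2=200, s2=6.5)
    print(z, z > 1.64)   # ~2.96, True: reject H0 at the one-sided 95% level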

Slide 21: Handling Methods for Continuous Attributes. Different methods: discretization-based, statistics-based, and non-discretization-based (e.g., minApriori).

Slide 22: Min-Apriori (Han et al.). Example: W1 and W2 tend to appear together in the same document. Document-term matrix (word frequencies per document, consistent with the support computations on slides 25–26):
TID  W1  W2  W3  W4  W5
D1    2   2   0   0   1
D2    0   0   1   2   2
D3    2   3   0   0   0
D4    0   0   1   0   1
D5    1   1   1   0   2

Slide 23: Min-Apriori. The data contain only continuous attributes of the same "type", e.g., frequencies of words in documents. Potential solution: convert the data into a 0/1 matrix and apply existing algorithms, but this loses the word frequency information. Discretization does not apply, because users want associations among words, not among ranges of word frequencies.

Slide 24: Min-Apriori. How do we determine the support of a word? If we simply sum up its frequencies, the support count can exceed the total number of documents! Instead, normalize the word vectors, e.g., using the L1 norm, so that each single word has a support of exactly 1.0.

Slide 25: Min-Apriori. New definition of support: sup(C) = Σ over documents i of min over words j ∈ C of D(i, j), where D is the normalized document-term matrix. Example: Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17.

Slide 26: Anti-monotone Property of Support. Example:
Sup(W1) = 0.4 + 0 + 0.4 + 0 + 0.2 = 1
Sup(W1, W2) = 0.33 + 0 + 0.4 + 0 + 0.17 = 0.9
Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17
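A minimal sketch of min-Apriori support, assuming the document-term matrix shown on slide 22; the helper names are hypothetical:

    # Document-term matrix from slide 22 (rows = D1..D5, columns = W1..W5)
    D = [
        [2, 2, 0, 0, 1],
        [0, 0, 1, 2, 2],
        [2, 3, 0, 0, 0],
        [0, 0, 1, 0, 1],
        [1, 1, 1, 0, 2],
    ]

    # L1-normalize each word (column) so every single word has support 1.0
    col_sums = [sum(row[j] for row in D) for j in range(len(D[0]))]
    N = [[row[j] / col_sums[j] for j in range(len(row))] for row in D]

    def support(itemset):
        # sup(C) = sum over documents of min over the words in C
        return sum(min(row[j] for j in itemset) for row in N)

    print(round(support([0]), 2))        # Sup(W1)         = 1.0
    print(round(support([0, 1]), 2))     # Sup(W1, W2)     = 0.9
    print(round(support([0, 1, 2]), 2))  # Sup(W1, W2, W3) = 0.17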

Slide 27: Anti-monotone Property of Support. Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent. The principle holds because the support of an itemset never exceeds the support of its subsets; this is known as the anti-monotone property of support.

Slide 28: Anti-monotone Property of Support. The desired properties:
- Support increases monotonically as the normalized frequency of a word increases.
- Support increases monotonically as the number of documents containing the word increases.
- Support has the anti-monotone property: e.g., since min({A,B}) ≥ min({A,B,C}), we have s({A,B}) ≥ s({A,B,C}), so support decreases monotonically as the number of words in an itemset increases.

Slide 29: (figure)

Slide 30: Multi-level Association Rules

Slide 31: Multi-level Association Rules. Why should we incorporate a concept hierarchy?
- Rules at lower levels may not have enough support to appear in any frequent itemset.
- Rules at lower levels of the hierarchy are overly specific: e.g., skim milk → white bread, 2% milk → wheat bread, foremost milk → wheat bread, etc., are all indicative of an association between milk and bread.

Slide 32: Multi-level Association Rules. How do support and confidence vary as we traverse the concept hierarchy? [Hierarchy: X is the parent of X1 and X2; Y is the parent of Y1 and Y2.]
- If X is the parent item of both X1 and X2, then s(X) ≤ s(X1) + s(X2).
- If s(X1 ∪ Y1) ≥ minsup, and X is the parent of X1 and Y is the parent of Y1, then s(X ∪ Y1) ≥ minsup, s(X1 ∪ Y) ≥ minsup, and s(X ∪ Y) ≥ minsup.
- If conf(X1 → Y1) ≥ minconf, then conf(X1 → Y) ≥ minconf.

Slide 33: Multi-level Association Rules. Approach 1: extend the current association rule formulation by augmenting each transaction with its higher-level items. Original transaction: {skim milk, wheat bread}. Augmented transaction: {skim milk, wheat bread, milk, bread, food}. A sketch of this augmentation follows below. Issues:
- Items at higher levels have much higher support counts: if the support threshold is low, too many frequent patterns involve items from the higher levels.
- Increased dimensionality of the data.
- Increased computation time due to wider transactions.
- Redundant rules.
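A minimal sketch of Approach 1, assuming a hypothetical child-to-parent map encoding the concept hierarchy:

    # Hypothetical concept hierarchy: child -> parent
    PARENT = {
        "skim milk": "milk",
        "wheat bread": "bread",
        "milk": "food",
        "bread": "food",
    }

    def augment(transaction):
        # Add every ancestor of each item to the transaction
        items = set(transaction)
        for item in transaction:
            while item in PARENT:
                item = PARENT[item]
                items.add(item)
        return items

    print(augment({"skim milk", "wheat bread"}))
    # {'skim milk', 'wheat bread', 'milk', 'bread', 'food'}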

Slide 34: Multi-level Association Rules. Approach 2: generate frequent patterns at the highest level first; then generate frequent patterns at the next level down, and so on. Issues:
- I/O requirements increase dramatically because more passes over the data are needed.
- Some potentially interesting cross-level association patterns may be missed.

Slide 35: Sequential Patterns

Slide 36: Sequence Data. Sequence database:
Object  Timestamp  Events
A       10         2, 3, 5
A       20         6, 1
A       23         1
B       11         4, 5, 6
B       17         2
B       21         7, 8, 1, 2
B       28         1, 6
C       14         1, 8, 7

Slide 37: Examples of Sequence Data.
Sequence database | Sequence | Element (transaction) | Event (item)
Customer | Purchase history of a given customer | A set of items bought by a customer at time t | Books, dairy products, CDs, etc.
Web data | Browsing activity of a particular Web visitor | A collection of files viewed by a Web visitor after a single mouse click | Home page, index page, contact info, etc.
Event data | History of events generated by a given sensor | Events triggered by a sensor at time t | Types of alarms generated by sensors
Genome sequences | DNA sequence of a particular species | An element of the DNA sequence | Bases A, T, G, C
[Figure: a sample sequence of elements E1 E2 → E1 E3 → E2 → E3 E4 → E2, annotated with "element (transaction)" and "event (item)".]

Slide 38: Formal Definition of a Sequence. A sequence is an ordered list of elements (transactions), s = <e1 e2 e3 ...>, where each element contains a collection of events (items), ei = {i1, i2, ..., ik}, and each element is attributed to a specific time or location. The length of a sequence, |s|, is the number of elements in the sequence. A k-sequence is a sequence that contains k events (items).

Slide 39: Examples of Sequences.
- A Web sequence (the pages viewed by a visitor, in order).
- The sequence of initiating events causing the nuclear accident at Three Mile Island (http://stellar-one.com/nuclear/staff_reports/summary_SOE_the_initiating_event.htm).
- The sequence of books checked out at a library.

Slide 40: Formal Definition of a Subsequence. A sequence <a1 a2 ... an> is contained in another sequence <b1 b2 ... bm> (m ≥ n) if there exist integers i1 < i2 < ... < in such that a1 ⊆ b_i1, a2 ⊆ b_i2, ..., an ⊆ b_in. The support of a subsequence w is the fraction of data sequences that contain w. A sequential pattern is a frequent subsequence, i.e., a subsequence whose support is ≥ minsup. [Table: example data sequences and subsequences with a Contain? column (Yes / No / Yes).]
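A minimal sketch of the containment test and the support computation, with sequences represented as lists of sets (an assumed representation, not from the slides):

    def contains(data_seq, sub_seq):
        # Greedy scan: match each element of sub_seq, in order, to some
        # later element of data_seq that is a superset of it.
        i = 0
        for element in data_seq:
            if i < len(sub_seq) and sub_seq[i] <= element:   # a_i ⊆ b_j
                i += 1
        return i == len(sub_seq)

    def support(database, sub_seq):
        # Fraction of data sequences that contain sub_seq
        return sum(contains(s, sub_seq) for s in database) / len(database)

    db = [
        [{2, 4}, {3, 5, 6}, {8}],
        [{1, 8}, {7}],
    ]
    print(contains(db[0], [{2}, {3, 6}, {8}]))   # True
    print(support(db, [{2}, {8}]))               # 0.5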

Slide 41: Counting Sequences (an example)

Slide 42: Sequential Pattern Mining: Definition. Given a database of sequences and a user-specified minimum support threshold, minsup, the task is to find all subsequences with support ≥ minsup.

Slide 43: Sequential Pattern Mining: Challenge. Given an n-event sequence, how many k-subsequences can be extracted from it? Each k-subsequence corresponds to a choice of k of the n events, so there are C(n, k) of them. For n = 9 and k = 4 (e.g., the selection pattern Y _ _ Y Y _ _ _ Y), C(9, 4) = 126.

Slide 44: Sequential Pattern Mining: Example. Minsup = 50%. Examples of frequent subsequences with their supports: 3/5 ☞ s = 60%; s = 60%; s = 80%; 4/5 ☞ s = 80%; s = 80%; s = 60%. [The concrete subsequences appeared in a table on the slide.]

Slide 45: Extracting Sequential Patterns. Given n events i1, i2, i3, ..., in:
- Candidate 1-subsequences: <{i1}>, <{i2}>, ..., <{in}>.
- Candidate 2-subsequences: <{i1, i2}>, <{i1, i3}>, ..., <{i1} {i1}>, <{i1} {i2}>, ..., <{in} {in}>.
- Candidate 3-subsequences: <{i1, i2, i3}>, <{i1, i2, i4}>, ..., <{i1, i2} {i1}>, <{i1, i2} {i2}>, ..., <{i1} {i1} {i1}>, <{i1} {i1} {i2}>, ...

Slide 46: (figure)

Slide 47: Lecture Outline

Slide 48: AprioriAll: The Idea. A basic method to mine sequential patterns, based on the Apriori algorithm. It counts all the large (frequent) sequences, including non-maximal ones, and uses an Apriori-generate function to produce candidates: the candidates for pass k are generated using only the large sequences found in pass k−1, and a pass over the data then finds their support.

Slide 49: AprioriAll: The Big Picture. A five-phase algorithm:
1. Sort phase: create the sequence database from the transactions.
2. Large itemset phase: find all frequent itemsets using Apriori.
3. Transformation phase: map the large itemsets to integers.
4. Sequence phase: find all frequent sequential patterns using Apriori.
5. Maximal phase: eliminate non-maximal sequences.

Slide 50: AprioriAll Algorithm (1)

Slide 51: 1. AprioriAll: Sequence Database Example. What are the frequent sequence patterns?

Slide 52: 2. AprioriAll: Sort Phase Example

Slide 53: 3. AprioriAll: Large Itemset Phase

Slide 54: 4. AprioriAll: Transformation Phase

Slide 55: 5. AprioriAll: Sequence Phase

Slide 56: 6. AprioriAll: Maximal Phase

Slide 57: Summary for AprioriAll. The algorithm wastes much time counting non-maximal sequences, which cannot be (maximal) sequential patterns. There are variations of AprioriAll that reduce the number of non-maximal candidates: AprioriSome and DynamicSome. AprioriAll has no time-window constraints. It is the basis of many efficient algorithms developed later, GSP among them.

Slide 58: Generalized Sequential Pattern (GSP).
Step 1: make the first pass over the sequence database D to yield all the frequent 1-element sequences.
Step 2: repeat until no new frequent sequences are found:
- Candidate generation: merge pairs of frequent subsequences found in the (k−1)th pass to generate candidate sequences that contain k items.
- Candidate pruning: prune candidate k-sequences that contain infrequent (k−1)-subsequences.
- Support counting: make a new pass over the sequence database D to find the support of these candidate sequences.
- Candidate elimination: eliminate candidate k-sequences whose actual support is less than minsup.

Slide 59: Candidate Generation.
- Base case (k = 2): merging two frequent 1-sequences <{i1}> and <{i2}> produces two candidate 2-sequences, <{i1} {i2}> and <{i1 i2}>.
- General case (k > 2): a frequent (k−1)-sequence w1 is merged with another frequent (k−1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event of w1 is the same as the subsequence obtained by removing the last event of w2. The resulting candidate is w1 extended with the last event of w2: if the last two events of w2 belong to the same element, the last event of w2 becomes part of the last element of w1; otherwise, it becomes a separate element appended to the end of w1.

Slide 60: Candidate Generation Examples.
- Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> produces the candidate <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element.
- Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> produces the candidate <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element.
- We do not have to merge w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>: if the latter is a viable candidate, it is obtained by merging w1 with <{2 6} {4 5}>.
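A minimal sketch of this merge step, with sequences as tuples of tuples and events inside an element kept sorted (an assumed representation, not from the slides):

    def merge(w1, w2):
        # Merge two frequent (k-1)-sequences per GSP, or return None.
        def drop_first(s):
            # Remove the first event of the sequence
            head, *rest = s
            return ((head[1:],) if len(head) > 1 else ()) + tuple(rest)

        def drop_last(s):
            # Remove the last event of the sequence
            *rest, tail = s
            return tuple(rest) + ((tail[:-1],) if len(tail) > 1 else ())

        if drop_first(w1) != drop_last(w2):
            return None
        last = w2[-1]
        if len(last) > 1:
            # Last two events of w2 share an element: extend w1's last element
            return w1[:-1] + (w1[-1] + (last[-1],),)
        # Otherwise append the last event of w2 as a new element
        return w1 + (last,)

    w1 = ((1,), (2, 3), (4,))
    print(merge(w1, ((2, 3), (4, 5))))       # ((1,), (2, 3), (4, 5))
    print(merge(w1, ((2, 3), (4,), (5,))))   # ((1,), (2, 3), (4,), (5,))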

Slide 61: Generalized Sequential Pattern (GSP) Example. Prune candidate k-sequences that contain infrequent (k−1)-subsequences.

Slide 62: The GSP Algorithm. Benefits from Apriori pruning, which reduces the search space. Bottlenecks: it scans the database multiple times and generates a huge set of candidate sequences. Hence there is a need for more efficient mining methods.

Slide 63: The SPADE Algorithm. SPADE (Sequential PAttern Discovery using Equivalence classes), developed by Zaki (2001), is a vertical-format sequential pattern mining method. The sequence database is mapped to a large set of entries Item: <SID, EID>, where SID is the sequence id and EID the element id. Sequential pattern mining is performed by growing the subsequences (patterns) one item at a time via Apriori-style candidate generation.
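A minimal sketch of the vertical layout, building id-lists (item → list of (SID, EID) occurrences) from the horizontal sequence database of slide 36; the variable names are illustrative:

    from collections import defaultdict

    # Horizontal format: sequence id -> list of (timestamp, events)
    db = {
        "A": [(10, {2, 3, 5}), (20, {6, 1}), (23, {1})],
        "B": [(11, {4, 5, 6}), (17, {2}), (21, {7, 8, 1, 2}), (28, {1, 6})],
        "C": [(14, {1, 8, 7})],
    }

    # Vertical format: item -> list of (SID, EID) occurrences
    idlists = defaultdict(list)
    for sid, sequence in db.items():
        for eid, events in sequence:
            for item in events:
                idlists[item].append((sid, eid))

    print(sorted(idlists[1]))
    # [('A', 20), ('A', 23), ('B', 21), ('B', 28), ('C', 14)]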

Slide 64: The SPADE Algorithm (continued)

Slide 65: Timing Constraints (I). For a pattern such as <{A B} {C} {D E}>, the constraints are: xg (max-gap) and ng (min-gap), which bound the time gap between consecutive elements (> ng and ≤ xg), and ms (maximum span), which bounds the time between the first and last elements (≤ ms). [Table: example data sequences and subsequences marked P (pass) or F (fail) under maxgap = 3, mingap = 1. Figure: the lower and upper time bounds l(s_j) and u(s_j) of each element, used to define the window size.]

Slide 66: Mining Sequential Patterns with Timing Constraints.
- Approach 1: mine sequential patterns without timing constraints, then post-process the discovered patterns.
- Approach 2: modify GSP to directly prune candidates that violate the timing constraints. Question: does the Apriori principle still hold?

Slide 67: Apriori Principle for Sequence Data. Suppose xg = 1 (max-gap), ng = 0 (min-gap), ms = 5 (maximum span), and minsup = 60%. A problem exists because of the max-gap constraint; no such problem arises if the max-gap is infinite.

Slide 68: [Timeline figure for objects A–E illustrating the max-gap problem: under xg = 1, ng = 0, ms = 5, and minsup = 60%, a pattern has support 40% while one of its super-patterns has support 60%, violating the Apriori principle.]

Slide 69: Contiguous Subsequences. s is a contiguous subsequence of w = <e1 e2 ... ek> if any of the following conditions holds:
1. s is obtained from w by deleting an item from either e1 or ek;
2. s is obtained from w by deleting an item from any element ei that contains more than 2 items;
3. s is a contiguous subsequence of s', and s' is a contiguous subsequence of w (recursive definition).
[Examples of sequences that are, and are not, contiguous subsequences of a given s were shown on the slide.]

Slide 70: Modified Candidate Pruning Step. Without the max-gap constraint, a candidate k-sequence is pruned if at least one of its (k−1)-subsequences is infrequent. With the max-gap constraint, a candidate k-sequence is pruned if at least one of its contiguous (k−1)-subsequences is infrequent; a sketch of generating those contiguous subsequences follows below.
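A minimal sketch generating the contiguous (k−1)-subsequences used in this pruning, per rules 1 and 2 on slide 69 (sequences as tuples of tuples, an assumed representation):

    def contiguous_k_minus_1(w):
        # Subsequences of w obtained by deleting one item from the first
        # element, the last element, or any element with more than 2 items.
        results = set()
        for pos, element in enumerate(w):
            if pos in (0, len(w) - 1) or len(element) > 2:
                for item in element:
                    shrunk = tuple(e for e in element if e != item)
                    if shrunk:
                        new = w[:pos] + (shrunk,) + w[pos + 1:]
                    else:  # the element vanished entirely
                        new = w[:pos] + w[pos + 1:]
                    results.add(new)
        return results

    w = ((1,), (2, 3), (4,))
    for s in sorted(contiguous_k_minus_1(w)):
        print(s)
    # ((1,), (2, 3)) and ((2, 3), (4,)) — from deleting event 4 or event 1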

Slide 71: Timing Constraints (II). In addition to xg (max-gap), ng (min-gap), and ms (maximum span), a window size ws allows the events of one pattern element to come from timestamps at most ws apart. [Table: example data sequences and subsequences with a Contain? column (No / Yes / Yes) under xg = 2, ng = 0, ws = 1, ms = 5.]

Slide 72: Modified Support Counting Step. Given a candidate pattern such as <{a, c}>, any data sequence that contains a and c in the same element, or c after a where time({c}) − time({a}) ≤ ws, or a after c where time({a}) − time({c}) ≤ ws, contributes to the support count of the candidate pattern.

Slide 73: Other Formulations. In some domains we may have only one very long time series. Examples: monitoring network traffic events for attacks, or monitoring telecommunication alarm signals. The goal is to find frequent sequences of events within the time series; this problem is also known as frequent episode mining. Event stream: E1 E2 E1 E2 E1 E2 E3 E4 E3 E4 E1 E2 E2 E4 E3 E5 E2 E3 E5 E1 E2 E3 E1. Pattern: [shown on the slide].

Slide 74: General Support Counting Schemes. Assume xg = 2 (max-gap), ng = 0 (min-gap), ws = 0 (window size), and ms = 2 (maximum span).

Slide 75: Frequent Subgraph Mining. Extend association rule mining to finding frequent subgraphs. Useful for Web mining, computational chemistry, bioinformatics, spatial data sets, etc.

Slide 76: Graph Definitions

Slide 77: Representing Transactions as Graphs. Each transaction is a clique of items.
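A minimal sketch turning a transaction into a clique, as an edge set over item pairs (plain Python; the transaction contents are illustrative):

    from itertools import combinations

    def transaction_to_clique(items):
        # A clique connects every pair of items in the transaction
        nodes = sorted(items)
        edges = list(combinations(nodes, 2))
        return nodes, edges

    nodes, edges = transaction_to_clique({"bread", "milk", "diaper"})
    print(nodes)   # ['bread', 'diaper', 'milk']
    print(edges)   # [('bread', 'diaper'), ('bread', 'milk'), ('diaper', 'milk')]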

Slide 78: Representing Graphs as Transactions

Slide 79: Challenges.
- Nodes may contain duplicate labels.
- Support and confidence: how should they be defined?
- Additional constraints imposed by the pattern structure: support and confidence are not the only constraints. Assumption: frequent subgraphs must be connected.
- Apriori-like approach: use frequent k-subgraphs to generate frequent (k+1)-subgraphs. But what is k?

Slide 80: Challenges (continued). Support: the number of graphs that contain a particular subgraph. The Apriori principle still holds. Level-wise (Apriori-like) approaches: vertex growing, where k is the number of vertices, and edge growing, where k is the number of edges.

Slide 81: Vertex Growing

Slide 82: Edge Growing

Slide 83: Apriori-like Algorithm. Find the frequent 1-subgraphs, then repeat:
- Candidate generation: use the frequent (k−1)-subgraphs to generate candidate k-subgraphs.
- Candidate pruning: prune candidate subgraphs that contain infrequent (k−1)-subgraphs.
- Support counting: count the support of each remaining candidate.
- Candidate elimination: eliminate candidate k-subgraphs that are infrequent.
In practice it is not that easy; there are many other issues.

Slide 84: Example: Dataset

Slide 85: Example

Slide 86: Candidate Generation. In Apriori, merging two frequent k-itemsets produces one candidate (k+1)-itemset. In frequent subgraph mining (vertex or edge growing), merging two frequent k-subgraphs may produce more than one candidate (k+1)-subgraph.

Slide 87: Multiplicity of Candidates (Vertex Growing)

Slide 88: Multiplicity of Candidates (Edge Growing). Case 1: identical vertex labels.

Slide 89: Multiplicity of Candidates (Edge Growing). Case 2: the core contains identical labels. Core: the (k−1)-subgraph that is common between the joined graphs.

Slide 90: Multiplicity of Candidates (Edge Growing). Case 3: core multiplicity.

Slide 91: Adjacency Matrix Representation. The same graph can be represented in many ways.

Slide 92: Graph Isomorphism. Two graphs are isomorphic if they are topologically equivalent, i.e., identical up to a relabeling of vertices.

Slide 93: Graph Isomorphism. A test for graph isomorphism is needed:
- during candidate generation, to determine whether a candidate has already been generated;
- during candidate pruning, to check whether its (k−1)-subgraphs are frequent;
- during support counting, to check whether a candidate is contained within another graph.

Slide 94: Graph Isomorphism. Use canonical labeling to handle isomorphism: map each graph into an ordered string representation (its code) such that two isomorphic graphs are mapped to the same canonical encoding. Example: take the lexicographically largest string obtainable from the adjacency matrix, e.g., string 0010001111010110 has canonical form 0111101011001000.
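A minimal brute-force sketch of canonical labeling for small unlabeled graphs, trying every vertex permutation of the adjacency matrix and keeping the lexicographically largest string; this is fine for tiny graphs, while practical miners use much smarter canonical-form schemes:

    from itertools import permutations

    def canonical_code(adj):
        # adj: n x n 0/1 adjacency matrix (list of lists)
        n = len(adj)
        best = ""
        for perm in permutations(range(n)):
            # Read the matrix under this vertex relabeling
            code = "".join(str(adj[perm[i]][perm[j]])
                           for i in range(n) for j in range(n))
            best = max(best, code)
        return best

    # Two isomorphic 3-vertex paths get the same canonical code
    g1 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # path with center vertex 1
    g2 = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # path with center vertex 0
    print(canonical_code(g1) == canonical_code(g2))   # True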

