
Mining Association Rules: Advanced Concepts and Algorithms
Lecture Notes for Chapter 7
By Gun Ho Lee, Intelligent Information Systems Lab, Soongsil University, Korea
This material is modified and reproduced from books and materials by P.-N. Tan et al., J. Han and M. Kamber, M. Dunham, etc.

Sequence of Transactions

Why Sequential Pattern Mining?

Sequence Database

Continuous and Categorical Attributes

Example of an association rule: {Number of Pages ∈ [5,10) ∧ (Browser = Mozilla)} → {Buy = No}, where [a,b) represents an interval that includes a but not b. How do we apply the association analysis formulation to these non-asymmetric binary variables?

Handling Categorical Attributes

• Transform each categorical attribute into asymmetric binary variables (see Tables 7.3 and 7.4 on page 419).
• Introduce a new "item" for each distinct attribute-value pair.
  – Example: replace the Browser Type attribute with the items "Browser Type = Internet Explorer" and "Browser Type = Mozilla".
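As an illustrative sketch (the column names below are made up), pandas produces exactly this one-item-per-attribute-value encoding:

```python
import pandas as pd

# Hypothetical survey records; column names are illustrative only.
df = pd.DataFrame({
    "Gender":  ["Male", "Female", "Male"],
    "Browser": ["Internet Explorer", "Mozilla", "Mozilla"],
})

# One asymmetric binary "item" per distinct attribute-value pair,
# e.g. columns "Browser=Internet Explorer" and "Browser=Mozilla".
items = pd.get_dummies(df, prefix_sep="=").astype(int)
print(items)
```

Each resulting 0/1 column can then be treated as an ordinary item by any frequent itemset algorithm.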

After binarizing categorical and symmetric binary attributes

Internet survey data with continuous attributes

Internet survey data after binarizing categorical and continuous attributes

Handling Categorical Attributes: Potential Issues

• What if an attribute has many possible values?
  – Example: the attribute Country has more than 200 possible values, so many of the attribute values may have very low support.
  – Potential solution: aggregate the low-support attribute values.
• What if the distribution of attribute values is highly skewed?
  – Example: 95% of the visitors have Buy = No, so most items would be associated with the (Buy = No) item.
  – Potential solution: drop such highly frequent items.

Handling Continuous Attributes

• Different kinds of rules:
  – Age ∈ [21,35) ∧ Salary ∈ [70k,120k) → Buy
  – Salary ∈ [70k,120k) ∧ Buy → Age: μ = 28, σ = 4
• Different methods:
  – Discretization-based
  – Statistics-based
  – Non-discretization based (minApriori)

Handling Continuous Attributes

• Use discretization.
• Unsupervised:
  – Equal-width binning
  – Equal-depth binning
  – Clustering
• Supervised: choose bin boundaries using the class labels.
  (Figure: attribute values v1–v9 of a yearly bargain sale, labeled Anomalous/Normal and grouped into bins 1–3.)
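For instance, the two unsupervised schemes correspond directly to pandas' cut and qcut (the values below are made up):

```python
import pandas as pd

age = pd.Series([21, 25, 28, 33, 37, 42, 58, 63])  # illustrative values

equal_width = pd.cut(age, bins=3)   # equal-width: each bin spans an equal range
equal_depth = pd.qcut(age, q=3)     # equal-depth: each bin holds ~equal counts
print(equal_width.value_counts())
print(equal_depth.value_counts())
```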

Discretization Issues

• The size of the discretized intervals affects support and confidence:
  – If the intervals are too small, a rule may not have enough support.
  – If the intervals are too large, a rule may not have enough confidence.
• Potential solution: use all possible intervals.
  – {Refund = No, (Income = $51,250)} → {Cheat = No}
  – {Refund = No, (60K ≤ Income < 80K)} → {Cheat = No}
  – {Refund = No, (0K ≤ Income < 1B)} → {Cheat = No}

Discretization Issues

• Execution time: if an attribute takes n values, there are on average O(n²) possible ranges.
• Too many rules:
  – {Refund = No, (Income = $51,250)} → {Cheat = No}
  – {Refund = No, (51K ≤ Income < 52K)} → {Cheat = No}
  – {Refund = No, (50K ≤ Income < 60K)} → {Cheat = No}

Approach by Srikant & Agrawal

• Preprocess the data:
  – Discretize each attribute using equi-depth partitioning.
  – Use a partial completeness measure to determine the number of partitions.
  – Merge adjacent intervals as long as their support is less than max-support.
• Apply existing association rule mining algorithms.
• Determine interesting rules in the output.

Methods for Handling Continuous Attributes

• Discretization-based
• Statistics-based
• Non-discretization based (minApriori)

Statistics-based Methods

• Example: Browser = Mozilla ∧ Buy = Yes → Age: μ = 23
• The rule consequent consists of a continuous variable, characterized by its statistics (mean, median, standard deviation, etc.).
• Approach:
  – Withhold the target variable from the rest of the data.
  – Apply existing frequent itemset generation to the rest of the data.
  – For each frequent itemset, compute the descriptive statistics of the corresponding target variable; the itemset becomes a rule by introducing the target variable as the rule consequent.
  – Apply a statistical test to determine the interestingness of the rule.

Statistics-based Methods

• How do we determine whether an association rule is interesting?
  – Compare the statistics of the segment of the population covered by the rule against the segment not covered by it: A → B: μ versus A → B: μ'.
  – Statistical hypothesis testing:
    Null hypothesis H0: μ' = μ + Δ
    Alternative hypothesis H1: μ' > μ + Δ

    Z = (μ' − μ − Δ) / √(s1²/n1 + s2²/n2)

    Z has zero mean and variance 1 under the null hypothesis, where
    n1: number of transactions supporting A
    n2: number of transactions not supporting A
    s1: standard deviation of the B values among transactions supporting A
    s2: standard deviation of the B values among transactions not supporting A

Statistics-based Methods

• Example: r: Browser = Mozilla ∧ Buy = Yes → Age: μ = 38
  – The rule is interesting if the difference between μ and μ' is greater than 5 years (i.e., Δ = 5).
  – For r, suppose n1 = 50, s1 = 3.5.
  – For r' (the complement): n2 = 200, s2 = 6.5.
  – For a one-sided test at the 95% confidence level, the critical Z-value for rejecting the null hypothesis is 1.64.
  – Since the computed Z is greater than 1.64, r is an interesting rule.
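A sketch of this computation: the slide does not give the complement's mean, so μ = 30 for the complement below is an assumed value used purely for illustration.

```python
from math import sqrt

mu_rule, n1, s1 = 38.0, 50, 3.5    # segment covered by rule r
mu_comp, n2, s2 = 30.0, 200, 6.5   # complement segment; 30.0 is an ASSUMED mean
delta = 5.0                        # required difference in mean age

# Test whether the rule segment's mean exceeds the complement's by more
# than delta: Z = (difference - delta) / sqrt(s1^2/n1 + s2^2/n2)
z = (mu_rule - mu_comp - delta) / sqrt(s1**2 / n1 + s2**2 / n2)
print(round(z, 2), z > 1.64)   # ~4.44, True -> r is interesting at the 95% level
```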

Methods for Handling Continuous Attributes

• Discretization-based
• Statistics-based
• Non-discretization based (minApriori)

Min-Apriori (Han et al.)

Example: W1 and W2 tend to appear together in the same document.

Document-term matrix:

TID | W1 | W2 | W3 | W4 | W5
D1  |  2 |  2 |  0 |  0 |  1
D2  |  0 |  0 |  1 |  2 |  2
D3  |  2 |  3 |  0 |  0 |  0
D4  |  0 |  0 |  1 |  0 |  1
D5  |  1 |  1 |  1 |  0 |  2

Min-Apriori

• The data contains only continuous attributes of the same "type", e.g., frequency of words in a document.
• Potential solution: convert the data into a 0/1 matrix and then apply existing algorithms, but this loses the word frequency information.
• Discretization does not apply here, because users want associations among words, not among ranges of word frequencies.

Min-Apriori

• How do we determine the support of a word?
  – If we simply sum up its frequency, the support count can exceed the total number of documents!
  – Normalize the word vectors, e.g., using the L1 norm, so that each single word has a support of exactly 1.0.

Min-Apriori

• New definition of support:

  sup(C) = Σ_{i ∈ D} min_{j ∈ C} M(i, j)

  where M is the L1-normalized document-term matrix.

• Example: Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17

Anti-monotone Property of Support

Example:
Sup(W1) = 0.4 + 0 + 0.4 + 0 + 0.2 = 1.0
Sup(W1, W2) = 0.33 + 0 + 0.4 + 0 + 0.17 = 0.9
Sup(W1, W2, W3) = 0 + 0 + 0 + 0 + 0.17 = 0.17
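A small NumPy sketch of this computation on the document-term matrix above (words indexed from 0):

```python
import numpy as np

# Document-term matrix (rows = docs D1..D5, columns = W1..W5).
M = np.array([
    [2, 2, 0, 0, 1],
    [0, 0, 1, 2, 2],
    [2, 3, 0, 0, 0],
    [0, 0, 1, 0, 1],
    [1, 1, 1, 0, 2],
], dtype=float)

M /= M.sum(axis=0)  # L1-normalize each word column, so sup({w}) = 1.0

def min_support(itemset):
    """Min-Apriori support: sum over docs of the per-doc minimum frequency."""
    return M[:, list(itemset)].min(axis=1).sum()

print(round(min_support({0}), 2))        # Sup(W1)         -> 1.0
print(round(min_support({0, 1}), 2))     # Sup(W1, W2)     -> 0.9
print(round(min_support({0, 1, 2}), 2))  # Sup(W1, W2, W3) -> 0.17
```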

Anti-monotone Property of Support

• Apriori principle: if an itemset is frequent, then all of its subsets must also be frequent.
• The Apriori principle holds due to the following property of the support measure: the support of an itemset never exceeds the support of its subsets. This is known as the anti-monotone property of support.

Anti-monotone Property of Support

• The desired properties:
  – Support increases monotonically as the normalized frequency of a word increases.
  – Support increases monotonically as the number of documents that contain the word increases.
  – Support has the anti-monotone property.
    E.g., since min({A,B}) ≥ min({A,B,C}), we have s({A,B}) ≥ s({A,B,C}): support decreases monotonically as the number of words in an itemset increases.

Multi-level Association Rules

Multi-level Association Rules

• Why should we incorporate a concept hierarchy?
  – Rules at lower levels may not have enough support to appear in any frequent itemset.
  – Rules at lower levels of the hierarchy are overly specific: e.g., skim milk → white bread, 2% milk → wheat bread, Foremost milk → wheat bread, etc., are all indicative of an association between milk and bread.

Multi-level Association Rules

• How do support and confidence vary as we traverse the concept hierarchy? Suppose X is the parent of X1 and X2, and Y is the parent of Y1 and Y2:
  – s(X) ≤ s(X1) + s(X2)
  – If s(X1 ∪ Y1) ≥ minsup, then s(X ∪ Y1) ≥ minsup, s(X1 ∪ Y) ≥ minsup, and s(X ∪ Y) ≥ minsup.
  – If conf(X1 → Y1) ≥ minconf, then conf(X1 → Y) ≥ minconf.

Multi-level Association Rules

• Approach 1: extend the current association rule formulation by augmenting each transaction with higher-level items (see the sketch below).
  – Original transaction: {skim milk, wheat bread}
  – Augmented transaction: {skim milk, wheat bread, milk, bread, food}
• Issues:
  – Items that reside at higher levels have much higher support counts; if the support threshold is low, this yields too many frequent patterns involving higher-level items.
  – Increased dimensionality of the data.
  – Increased computation time due to wider transactions.
  – Redundant rules.
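A minimal sketch of Approach 1's augmentation step, assuming a toy item → parent table (the hierarchy entries are illustrative):

```python
# Toy concept hierarchy: item -> parent (illustrative entries only).
parent = {
    "skim milk": "milk", "2% milk": "milk",
    "wheat bread": "bread", "white bread": "bread",
    "milk": "food", "bread": "food",
}

def augment(transaction):
    """Extend a transaction with every ancestor of each of its items."""
    out = set(transaction)
    for item in transaction:
        while item in parent:      # walk up the hierarchy to the root
            item = parent[item]
            out.add(item)
    return out

print(sorted(augment({"skim milk", "wheat bread"})))
# -> ['bread', 'food', 'milk', 'skim milk', 'wheat bread']
```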

Multi-level Association Rules

• Approach 2:
  – Generate frequent patterns at the highest level first.
  – Then generate frequent patterns at the next highest level, and so on.
• Issues:
  – I/O requirements increase dramatically because we need to make more passes over the data.
  – May miss some potentially interesting cross-level association patterns.

Sequential Patterns

Sequence Data

Sequence database:

Object | Timestamp | Events
A      | 10        | 2, 3, 5
A      | 20        | 6, 1
A      | 23        | 1
B      | 11        | 4, 5, 6
B      | 17        | 2
B      | 21        | 7, 8, 1, 2
B      | 28        | 1, 6
C      | 14        | 1, 8, 7

Examples of Sequence Data

Sequence Database | Sequence | Element (Transaction) | Event (Item)
Customer | Purchase history of a given customer | A set of items bought by a customer at time t | Books, dairy products, CDs, etc.
Web Data | Browsing activity of a particular Web visitor | A collection of files viewed by a Web visitor after a single mouse click | Home page, index page, contact info, etc.
Event data | History of events generated by a given sensor | Events triggered by a sensor at time t | Types of alarms generated by sensors
Genome sequences | DNA sequence of a particular species | An element of the DNA sequence | Bases A, T, G, C

(Figure: an example sequence illustrating elements (transactions) and the events (items) within them.)

Formal Definition of a Sequence

• A sequence is an ordered list of elements (transactions): s = <e1 e2 e3 …>
  – Each element contains a collection of events (items): ei = {i1, i2, …, ik}
  – Each element is attributed to a specific time or location.
• The length of a sequence, |s|, is the number of elements in the sequence.
• A k-sequence is a sequence that contains k events (items).

Examples of Sequence

• Web sequence:
  < {Homepage} {Electronics} {Digital Cameras} {Canon Digital Camera} {Shopping Cart} {Order Confirmation} >
• Sequence of initiating events causing the nuclear accident at Three Mile Island:
  < {clogged resin} {outlet valve closure} {loss of feedwater} {condenser polisher outlet valve shut} {booster pumps trip} {main waterpump trips} {main turbine trips} {reactor pressure increases} >
• Sequence of books checked out at a library:
  < {Fellowship of the Ring} {The Two Towers} {Return of the King} >

Formal Definition of a Subsequence

• A sequence <a1 a2 … an> is contained in another sequence <b1 b2 … bm> (m ≥ n) if there exist integers i1 < i2 < … < in such that a1 ⊆ b_i1, a2 ⊆ b_i2, …, an ⊆ b_in.
• The support of a subsequence w is the fraction of data sequences that contain w.
• A sequential pattern is a frequent subsequence (i.e., a subsequence whose support is ≥ minsup).

Data sequence | Subsequence | Contain?
< {2,4} {3,5,6} {8} > | < {2} {3,5} > | Yes
< {1,2} {3,4} > | < {1} {2} > | No
< {2,4} {2,4} {2,5} > | < {2} {4} > | Yes
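A direct transcription of this definition (a sketch: elements are Python sets, and greedy left-to-right matching suffices when there are no timing constraints):

```python
def is_contained(sub, seq):
    """Check <a1 ... an> contained in <b1 ... bm>: each element ai must be a
    subset of some element b_ij, with indices i1 < i2 < ... < in increasing."""
    j = 0
    for a in sub:
        while j < len(seq) and not set(a) <= set(seq[j]):
            j += 1
        if j == len(seq):
            return False
        j += 1  # the next element of sub must match strictly later in seq
    return True

# The three containment checks from the table above:
print(is_contained([{2}, {3, 5}], [{2, 4}, {3, 5, 6}, {8}]))   # True
print(is_contained([{1}, {2}],    [{1, 2}, {3, 4}]))           # False
print(is_contained([{2}, {4}],    [{2, 4}, {2, 4}, {2, 5}]))   # True
```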

Counting Sequences (An Example)

Sequential Pattern Mining: Definition

• Given:
  – a database of sequences
  – a user-specified minimum support threshold, minsup
• Task: find all subsequences with support ≥ minsup.

Sequential Pattern Mining: Challenge

• Given a sequence <{a b} {c d e} {f} {g h i}>, examples of its subsequences include <{a} {c d} {f} {g}>, <{c d e}>, <{b} {g}>, etc.
• How many k-subsequences can be extracted from a given n-sequence? Each k-subsequence corresponds to a choice of k of the n events, e.g. for n = 9, k = 4: Y _ _ Y Y _ _ _ Y, giving C(9, 4) = 126.
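In Python the count is one call (assuming, as the Y/_ diagram does, that all n events are distinct):

```python
from math import comb

n, k = 9, 4
print(comb(n, k))  # 126 distinct 4-subsequences of a 9-event sequence
```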

Sequential Pattern Mining: Example

(Figure: a five-sequence database with minsup = 50%; the example frequent subsequences have supports of 60% (contained in 3 of the 5 data sequences) and 80% (contained in 4 of the 5).)

Extracting Sequential Patterns

• Given n events: i1, i2, i3, …, in
• Candidate 1-subsequences:
  <{i1}>, <{i2}>, …, <{in}>
• Candidate 2-subsequences:
  <{i1, i2}>, <{i1, i3}>, …, <{i1} {i1}>, <{i1} {i2}>, …, <{in} {in}>
• Candidate 3-subsequences:
  <{i1, i2, i3}>, <{i1, i2, i4}>, …, <{i1, i2} {i1}>, <{i1, i2} {i2}>, …, <{i1} {i1, i2}>, <{i1} {i1} {i1}>, <{i1} {i1} {i2}>, …

Lecture Outline

AprioriAll: The Idea

• A basic method to mine sequential patterns:
  – Based on the Apriori algorithm.
  – Count all the large sequences, including non-maximal ones.
  – Use the apriori-generate function to generate candidate sequences: obtain the candidates for pass k using only the large (frequent) sequences found in pass k−1, then make a pass over the data to find their support.

AprioriAll: The Big Picture

A five-phase algorithm:
1. Sort phase: create the sequence database from the transactions.
2. Large itemset phase: find all frequent itemsets using Apriori.
3. Transformation phase: map each large itemset to an integer.
4. Sequence phase: find all frequent sequential patterns using Apriori.
5. Maximal phase: eliminate non-maximal sequences.

AprioriAll Algorithm (1)

AprioriAll: Sequence Database Example

Which sequence patterns are frequent?

AprioriAll: Sort Phase Example

AprioriAll: Large Itemset Phase

AprioriAll: Transformation Phase

AprioriAll: Sequence Phase

AprioriAll: Maximal Phase

Summary of AprioriAll

• The algorithm wastes much time counting non-maximal sequences, which cannot be maximal sequential patterns themselves.
• There are variations of AprioriAll that reduce the number of non-maximal candidates: AprioriSome and DynamicSome.
• It has no time-window constraints.
• AprioriAll is the basis of many efficient algorithms developed later; GSP is among them.

Generalized Sequential Pattern (GSP)

• Step 1: make the first pass over the sequence database D to yield all the 1-element frequent sequences.
• Step 2: repeat until no new frequent sequences are found:
  – Candidate generation: merge pairs of frequent subsequences found in the (k−1)-th pass to generate candidate sequences that contain k items.
  – Candidate pruning: prune candidate k-sequences that contain infrequent (k−1)-subsequences.
  – Support counting: make a new pass over the sequence database D to find the support of these candidate sequences.
  – Candidate elimination: eliminate candidate k-sequences whose actual support is less than minsup.

Candidate Generation

• Base case (k = 2): merging two frequent 1-sequences <{i1}> and <{i2}> produces two candidate 2-sequences: <{i1} {i2}> and <{i1, i2}>.
• General case (k > 2): a frequent (k−1)-sequence w1 is merged with another frequent (k−1)-sequence w2 to produce a candidate k-sequence if the subsequence obtained by removing the first event in w1 is the same as the subsequence obtained by removing the last event in w2. The resulting candidate is w1 extended with the last event of w2:
  – If the last two events in w2 belong to the same element, the last event in w2 becomes part of the last element in w1.
  – Otherwise, the last event in w2 becomes a separate element appended to the end of w1.

Candidate Generation Examples

• Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4 5}> produces the candidate sequence <{1} {2 3} {4 5}>, because the last two events in w2 (4 and 5) belong to the same element.
• Merging w1 = <{1} {2 3} {4}> and w2 = <{2 3} {4} {5}> produces the candidate sequence <{1} {2 3} {4} {5}>, because the last two events in w2 (4 and 5) do not belong to the same element.
• We do not have to merge w1 = <{1} {2 6} {4}> and w2 = <{1} {2} {4 5}> to produce the candidate <{1} {2 6} {4 5}>, because if the latter is a viable candidate, it can be obtained by merging w1 with <{2 6} {4 5}>.
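A sketch of the general-case (k > 2) merge rule, representing a sequence as a list of event sets; this is a simplified reading of GSP's candidate generation, not Srikant & Agrawal's exact implementation:

```python
def drop_first_event(seq):
    """Sequence minus the first event of its first element."""
    s = [sorted(e) for e in seq]
    s[0] = s[0][1:]
    return [e for e in s if e]

def drop_last_event(seq):
    """Sequence minus the last event of its last element."""
    s = [sorted(e) for e in seq]
    s[-1] = s[-1][:-1]
    return [e for e in s if e]

def merge(w1, w2):
    """GSP merge: return the candidate k-sequence, or None if the
    merge condition (w1 minus first event == w2 minus last event) fails."""
    if drop_first_event(w1) != drop_last_event(w2):
        return None
    cand = [sorted(e) for e in w1]
    last_event = sorted(w2[-1])[-1]
    if len(w2[-1]) > 1:
        cand[-1].append(last_event)   # last two events of w2 share an element
    else:
        cand.append([last_event])     # last event of w2 is its own element
    return cand

# The two positive examples above:
print(merge([{1}, {2, 3}, {4}], [{2, 3}, {4, 5}]))    # [[1], [2, 3], [4, 5]]
print(merge([{1}, {2, 3}, {4}], [{2, 3}, {4}, {5}]))  # [[1], [2, 3], [4], [5]]
```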

Generalized Sequential Pattern (GSP) Example

Prune candidate k-sequences that contain infrequent (k−1)-subsequences.

The GSP Algorithm

• Benefits from the Apriori pruning: reduces the search space.
• Bottlenecks:
  – Scans the database multiple times.
  – Generates a huge set of candidate sequences.
• There is a need for more efficient mining methods.

The SPADE Algorithm

• SPADE (Sequential PAttern Discovery using Equivalence classes), developed by Zaki (2001).
• A vertical-format sequential pattern mining method: the sequence database is mapped to a large set of items, each with an id-list of <SID, EID> pairs (sequence ID, element ID).
• Sequential pattern mining is performed by growing the subsequences (patterns) one item at a time via Apriori-style candidate generation.

The SPADE Algorithm

Timing Constraints (I)

For a pattern such as <{A B} {C} {D E}>, the gap between consecutive elements must be greater than ng (the min-gap) and at most xg (the max-gap), and the time between the first and last elements must be at most ms (the maximum span).

(Table: four data-sequence/subsequence pairs evaluated with maxgap = 3 and mingap = 1, each marked pass (P) or fail (F) on the two gap constraints. Figure: with a window, each element sj spans a time interval [l(sj), u(sj)]; the max-gap is measured as u(sj+1) − l(sj) and the min-gap as l(sj+1) − u(sj).)
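A backtracking containment check under these three constraints might look as follows. This is a sketch: it ignores window size (each element is matched at a single timestamp), assumes `seq` is sorted by time, and follows the slide's boundaries (gap > ng, gap ≤ xg, span ≤ ms).

```python
def contains_timed(sub, seq, xg, ng, ms):
    """Does the timestamped data sequence seq = [(time, itemset), ...]
    contain sub under max-gap (xg), min-gap (ng), and max-span (ms)?
    Backtracking is needed: with a max-gap, greedy matching can fail."""
    def search(i, j, first_t, prev_t):
        if i == len(sub):
            return True
        for k in range(j, len(seq)):
            t, elem = seq[k]
            if first_t is not None and t - first_t > ms:
                break                      # seq is time-sorted: stop early
            if not set(sub[i]) <= set(elem):
                continue
            if prev_t is not None and not (ng < t - prev_t <= xg):
                continue
            if search(i + 1, k + 1, t if first_t is None else first_t, t):
                return True
        return False
    return search(0, 0, None, None)

# <{A} {C}> occurs at times 0 and 3: gap 3 <= xg and span 3 <= ms -> contained
print(contains_timed([{"A"}, {"C"}],
                     [(0, {"A"}), (2, {"B"}), (3, {"C"})],
                     xg=3, ng=0, ms=5))   # True
```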

Mining Sequential Patterns with Timing Constraints

• Approach 1:
  – Mine sequential patterns without timing constraints.
  – Postprocess the discovered patterns.
• Approach 2:
  – Modify GSP to directly prune candidates that violate the timing constraints.
  – Question: does the Apriori principle still hold?

Apriori Principle for Sequence Data

Suppose: xg = 1 (max-gap), ng = 0 (min-gap), ms = 5 (maximum span), minsup = 60%.

A problem exists because of the max-gap constraint; there is no such problem if the max-gap is infinite.

Timeline

(Figure: event timelines for objects A–E with xg = 1, ng = 0, ms = 5, minsup = 60%. Under the max-gap constraint, a pattern's support can be 40% while a longer pattern containing it has support 60%, violating the Apriori principle.)

Contiguous Subsequences

• s is a contiguous subsequence of w = <e1 e2 … ek> if any of the following conditions holds:
  1. s is obtained from w by deleting an item from either e1 or ek.
  2. s is obtained from w by deleting an item from any element ei that contains more than 2 items.
  3. s is a contiguous subsequence of s', and s' is a contiguous subsequence of w (recursive definition).
• Examples: s = <{1} {2}>
  – is a contiguous subsequence of <{1} {2 3}>, <{1 2} {2}>, and <{3 4} {1 2} {2 3} {4}>
  – is not a contiguous subsequence of <{1} {3} {2}> or <{2} {1} {3} {2}>

Modified Candidate Pruning Step

• Without the max-gap constraint: a candidate k-sequence is pruned if at least one of its (k−1)-subsequences is infrequent.
• With the max-gap constraint: a candidate k-sequence is pruned if at least one of its contiguous (k−1)-subsequences is infrequent.

Timing Constraints (II)

In addition to the max-gap (xg), min-gap (ng), and maximum span (ms) constraints, a window size ws allows the events of a single pattern element to be gathered from transactions whose timestamps are within ws of each other.

(Table: three data-sequence/subsequence pairs checked for containment with xg = 2, ng = 0, ws = 1, ms = 5; they evaluate to No, Yes, Yes.)

Modified Support Counting Step

• Given a candidate pattern <{a, c}>, any data sequence that contains
  <… {a c} …>,
  <… {a} … {c} …> (where time({c}) − time({a}) ≤ ws), or
  <… {c} … {a} …> (where time({a}) − time({c}) ≤ ws)
  will contribute to the support count of the candidate pattern.

Other Formulation

• In some domains, we may have only one very long time series, e.g.:
  – monitoring network traffic events for attacks
  – monitoring telecommunication alarm signals
• The goal is to find frequent sequences of events in the time series. This problem is also known as frequent episode mining.

E1 E2 E1 E2 E1 E2 E3 E4 E3 E4 E1 E2 E2 E4 E3 E5 E2 E3 E5 E1 E2 E3 E1
Pattern: <E1> <E3>

General Support Counting Schemes

Assume: xg = 2 (max-gap), ng = 0 (min-gap), ws = 0 (window size), ms = 2 (maximum span).

(Figure: one data sequence evaluated under the counting methods COBJ, CWIN, CMINWIN, CDIST_O, and CDIST.)

Frequent Subgraph Mining

• Extend association rule mining to finding frequent subgraphs.
• Useful for Web mining, computational chemistry, bioinformatics, spatial data sets, etc.

Graph Definitions

Representing Transactions as Graphs

• Each transaction is a clique of items.

Representing Graphs as Transactions

Challenges

• A node may contain duplicate labels.
• Support and confidence: how should they be defined?
• Additional constraints imposed by the pattern structure:
  – Support and confidence are not the only constraints.
  – Assumption: frequent subgraphs must be connected.
• Apriori-like approach: use frequent k-subgraphs to generate frequent (k+1)-subgraphs. But what is k?

Challenges…

• Support: the number of graphs that contain a particular subgraph.
• The Apriori principle still holds.
• Level-wise (Apriori-like) approach:
  – Vertex growing: k is the number of vertices.
  – Edge growing: k is the number of edges.

Vertex Growing

Edge Growing

Apriori-like Algorithm

• Find frequent 1-subgraphs.
• Repeat:
  – Candidate generation: use frequent (k−1)-subgraphs to generate candidate k-subgraphs.
  – Candidate pruning: prune candidate subgraphs that contain infrequent (k−1)-subgraphs.
  – Support counting: count the support of each remaining candidate.
  – Candidate elimination: eliminate candidate k-subgraphs that are infrequent.
• In practice it is not this easy; there are many other issues.

Example: Dataset

Example

Candidate Generation

• In Apriori: merging two frequent k-itemsets produces a candidate (k+1)-itemset.
• In frequent subgraph mining (vertex/edge growing): merging two frequent k-subgraphs may produce more than one candidate (k+1)-subgraph.

Multiplicity of Candidates (Vertex Growing)

Multiplicity of Candidates (Edge Growing)

• Case 1: identical vertex labels.

Multiplicity of Candidates (Edge Growing)

• Case 2: the core contains identical labels.
  – Core: the (k−1)-subgraph that is common to the two joined graphs.

Multiplicity of Candidates (Edge Growing)

• Case 3: core multiplicity.

Adjacency Matrix Representation

The same graph can be represented in many ways.

Graph Isomorphism

Two graphs are isomorphic if one is topologically equivalent to the other.

Graph Isomorphism

• A test for graph isomorphism is needed:
  – during candidate generation, to determine whether a candidate has already been generated;
  – during candidate pruning, to check whether its (k−1)-subgraphs are frequent;
  – during support counting, to check whether a candidate is contained within another graph.

Graph Isomorphism

• Use canonical labeling to handle isomorphism:
  – Map each graph into an ordered string representation (known as its code) such that two isomorphic graphs are mapped to the same canonical encoding.
  – Example: use the string read off the lexicographically largest adjacency matrix as the canonical form.
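For small graphs this canonical code can be computed by brute force over vertex orderings. A sketch for unlabeled graphs (the slides' version also folds vertex and edge labels into the string):

```python
from itertools import permutations

def canonical_code(adj):
    """Canonical string code of a small graph: the lexicographically largest
    row-major adjacency-matrix string over all vertex orderings. O(n!) time,
    which is fine for the tiny graphs used in candidate generation examples."""
    n = len(adj)
    best = None
    for p in permutations(range(n)):
        code = "".join(str(adj[p[i]][p[j]]) for i in range(n) for j in range(n))
        best = code if best is None or code > best else best
    return best

# Two different adjacency matrices of the same triangle-with-pendant graph:
g1 = [[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]]
g2 = [[0, 1, 0, 0], [1, 0, 1, 1], [0, 1, 0, 1], [0, 1, 1, 0]]
print(canonical_code(g1) == canonical_code(g2))  # True: isomorphic, same code
```

Maximizing over all orderings guarantees that every member of an isomorphism class yields the same code, which is exactly what the candidate generation, pruning, and counting steps need.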