An Efficient Rigorous Approach for Identifying Statistically Significant Frequent Itemsets.


1 An Efficient Rigorous Approach for Identifying Statistically Significant Frequent Itemsets

2 Data Mining Discover hidden patterns, correlations, association rules, etc., in large data sets. When is a discovery interesting, important, significant? We develop a rigorous mathematical/statistical approach to answer this question.

3 Frequent Itemsets Dataset D of t transactions t_j over a base set of items I (each t_j ⊆ I). Support of an itemset X = number of transactions that contain X. Example: support({Beer, Diaper}) = 3.

4 Frequent Itemsets Discover all itemsets with significant support. A fundamental primitive in data mining applications. Example: support({Beer, Diaper}) = 3.

5 Significance What support level makes an itemset significantly frequent? Minimize false positive and false negative discoveries. Improve the "quality" of subsequent analyses. How can we narrow the search to focus only on significant itemsets? Reduce the possibly exponential-time search.

6 Statistical Model Input: D = a dataset of t transactions over |I| = n items. For i ∈ I, let n(i) be the support of {i} in D, and f_i = n(i)/t the frequency of i in D. H0 model: D = a random dataset of t transactions over |I| = n items, where item i is included in transaction j with probability f_i, independently of all other events.
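Under this H0 model, a random dataset is easy to sample. A minimal Python sketch (the function name and the list-of-sets representation are our own, not from the talk):

```python
import random

def random_dataset(t, freqs, seed=None):
    """Sample a dataset under H0: transaction j contains item i
    with probability f_i, independently of all other events."""
    rng = random.Random(seed)
    return [
        {i for i, f in enumerate(freqs) if rng.random() < f}
        for _ in range(t)
    ]
```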

7 Statistical Tests H0: null hypothesis – the support of no itemset is significant with respect to D. H1: alternative hypothesis – the support of itemsets X_1, X_2, X_3, … is significant; it is unlikely that their support comes from the distribution of D. Significance level: α = Prob(rejecting H0 when it is true).

8 Naïve Approach Let X = {x_1, x_2, …, x_r} and f_X = ∏_j f_{x_j}, the probability that X appears in a given transaction. Let s_X = support of X, distributed s_X ∼ B(t, f_X). Reject H0 if Prob(B(t, f_X) ≥ s_X) = p-value ≤ α.
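The naïve test can be sketched directly from the binomial tail. A Python illustration (the function names and interface are ours; the tail is computed exactly via the complement, which is fast when s is small):

```python
import math

def binom_tail(t, f, s):
    """P(B(t, f) >= s), via the complement of the lower tail."""
    return 1.0 - sum(math.comb(t, k) * f**k * (1 - f) ** (t - k) for k in range(s))

def naive_test(t, f_x, s_x, alpha=0.05):
    """Naive approach: reject H0 for itemset X if P(B(t, f_X) >= s_X) <= alpha."""
    return binom_tail(t, f_x, s_x) <= alpha
```

For instance, `naive_test(10**6, 1e-6, 7)` flags the pair from the example on the next slide as significant, which is exactly the problem discussed there.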

9 Naïve Approach Variations: R = support / E[support in D]; R = support − E[support in D]; Z-value = (s − E[s]) / σ[s]; many more…

10 What's wrong? – an example D has 1,000,000 transactions over 1000 items, and each item has frequency 1/1000. We observe that a pair {i, j} appears 7 times; is this pair statistically significant? In D (the random dataset): E[support({i, j})] = 1 and Prob({i, j} has support ≥ 7) ≃ 0.0001. With a p-value of 0.0001 it must be significant! What's wrong?

11 There are 499,500 pairs, and each has probability ≃ 0.0001 of appearing in 7 transactions in D. The expected number of pairs with support ≥ 7 in D is therefore ≃ 50 – not such a rare event! Many false positive discoveries (flagging itemsets that are not significant). We need to correct for the multiplicity of hypotheses.
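The arithmetic behind these two slides can be checked in a few lines of Python (the variable names are ours):

```python
import math

# Parameters from the example: 10^6 transactions, 1000 items, each of frequency 1/1000.
t, n, f = 10**6, 1000, 1e-3
f_pair = f * f                  # probability a fixed pair {i, j} is in one transaction
num_pairs = math.comb(n, 2)     # 499,500 candidate pairs

# p = P(B(t, f_pair) >= 7), via the complement of the lower tail.
p = 1.0 - sum(math.comb(t, k) * f_pair**k * (1 - f_pair) ** (t - k) for k in range(7))

# Expected number of pairs reaching support 7 purely by chance: dozens, not a rarity.
expected_flagged = num_pairs * p
```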

12 Multi-Hypothesis Test Testing for significant itemsets of size k involves testing simultaneously for m = C(n, k) null hypotheses. H0(X) = "the support of X conforms with D"; s_X = support of X, distributed s_X ∼ B(t, f_X). How do we combine m tests while minimizing false positive and false negative discoveries?

13 Family Wise Error Rate (FWER) FWER = probability of at least one false positive (flagging a non-significant itemset as significant). Bonferroni method (union bound): test each null hypothesis with significance level α/m. Too conservative – many false negatives – it fails to flag many significant itemsets.
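The Bonferroni correction itself is a one-liner. A minimal sketch (the list-based interface is our own):

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Union bound: test each of the m = len(p_values) hypotheses at level
    alpha / m, so that the probability of any false positive (the FWER)
    is at most alpha."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]
```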

14 False Discovery Rate (FDR) A less conservative approach. V = number of false positive discoveries; R = total number of rejected null hypotheses = number of itemsets flagged as significant. FDR = E[V/R] (FDR = 0 when R = 0). Test with level of significance α: reject the maximum number of null hypotheses such that FDR ≤ α.

15 Standard Multi-Hypothesis Test (Benjamini-Hochberg): sort the p-values p_(1) ≤ p_(2) ≤ … ≤ p_(m) and reject the hypotheses with the i smallest p-values, where i is the largest index such that p_(i) ≤ i·α/m.

16 Less conservative than the Bonferroni method: threshold i·α/m vs. α/m. For m = C(n, k), it still needs a very small individual p-value to reject a hypothesis.
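The step-up procedure behind slides 15-16 (the standard Benjamini-Hochberg test) can be sketched as follows; the interface is our own:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up: with p_(1) <= ... <= p_(m), find the largest
    rank i such that p_(i) <= i * alpha / m and reject the i smallest p-values.
    Controls FDR at level alpha; less conservative than Bonferroni's alpha / m."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k = rank
    rejected = [False] * m
    for i in order[:k]:
        rejected[i] = True
    return rejected
```

For example, on p-values [0.01, 0.02, 0.03, 0.5] with α = 0.05, Bonferroni (threshold 0.0125) rejects only the first hypothesis, while Benjamini-Hochberg rejects the first three.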

17 Alternative Approach Q(k, s_i) = observed number of itemsets of size k and support ≥ s_i. p-value = the probability of observing Q(k, s_i) in D. Fewer hypotheses. But how do we compute the p-value? What is the distribution of the number of itemsets of size k and support ≥ s_i in D?

18 Permutation Test Use simulations to estimate the probabilities: choose a dataset at random and count. Main problem: with m = C(n, k) hypotheses, the small probabilities needed to reject a hypothesis require a lot of simulations to estimate.

19 Main Contributions Poisson approximation: let Q_{k,s} = number of itemsets of size k and support ≥ s in D (the random dataset); for s ≥ s_min, Q_{k,s} is well approximated by a Poisson distribution. Based on the Poisson approximation – a powerful FDR multi-hypothesis test for significant frequent itemsets.

20 Chen-Stein Method A powerful technique for approximating the sum of dependent Bernoulli variables. For an itemset X of k items, let Z_X = 1 if X has support at least s, else Z_X = 0. Q_{k,s} = Σ_X Z_X (sum over all X of k items). U ∼ Poisson(λ) with λ = E[Q_{k,s}]. Dependency neighborhood of X: I(X) = {Y : |Y| = k, Y ∩ X ≠ ∅}.

21 Chen-Stein Method (2) The Chen-Stein bound: the total variation distance between Q_{k,s} and U ∼ Poisson(λ) is at most 2(b1 + b2), where b1 = Σ_X Σ_{Y ∈ I(X)} E[Z_X] E[Z_Y] and b2 = Σ_X Σ_{Y ∈ I(X), Y ≠ X} E[Z_X Z_Y].

22 Approximation Result Q_{k,s} is well approximated by a Poisson distribution for s ≥ s_min.

23 Monte-Carlo Estimate To determine s_min for a given set of parameters (n, t, f_i): choose m random datasets with the required parameters; for each dataset, extract all itemsets with support at least s′ (≤ s_min); find the minimum s such that Prob(b1(s) + b2(s) ≤ ε) ≥ 1 − δ.
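The final search for s_min reduces to an empirical check over the simulated Chen-Stein bounds. A sketch, where the dict-of-samples interface is our own assumption:

```python
def estimate_s_min(bound_samples, eps, delta):
    """bound_samples[s] = values of b1(s) + b2(s) observed across the m random
    datasets. Return the minimum s whose empirical P(b1(s) + b2(s) <= eps)
    is at least 1 - delta, or None if no support level qualifies."""
    for s in sorted(bound_samples):
        vals = bound_samples[s]
        if sum(v <= eps for v in vals) / len(vals) >= 1 - delta:
            return s
    return None
```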

24 New Statistical Test Instead of testing the significance of the support of individual itemsets, we test the significance of the number of itemsets with a given support. The null-hypothesis distribution is specified by the Poisson approximation result. This reduces the number of simultaneous tests. A more powerful test – fewer false negatives.

25 Test I Define α_1, α_2, α_3, … such that Σ α_i ≤ α. For i = 0, …, log(s_max − s_min) + 1, let s_i = s_min + 2^i. Q(k, s_i) = observed number of itemsets of size k and support ≥ s_i. H0(k, s_i) = "Q(k, s_i) conforms with Poisson(λ_i)". Reject H0(k, s_i) if p-value < α_i.

26 Test I Let s* be the smallest s such that H0(k, s) is rejected by Test I. With confidence level α, the number of itemsets with support ≥ s* is significant. However, some itemsets with support ≥ s* could still be false positives.

27 Test II Define β_1, β_2, β_3, … such that Σ β_i ≤ β. Reject H0(k, s_i) if: p-value < α_i and Q(k, s_i) ≥ λ_i/β_i. Let s* be the minimum s such that H0(k, s) was rejected. If we flag all itemsets with support ≥ s* as significant, then FDR ≤ β.

28 Proof V_i = number of false discoveries when H0(k, s_i) is the first rejected hypothesis; E_i = the event "H0(k, s_i) is rejected".

29 Real Datasets Standard benchmarks from the FIMI repository: http://fimi.cs.helsinki.fi/data/ (m = avg. transaction length).

30 Experimental Results Poisson approximation: being in the Poisson "regime" does not mean that no itemsets are expected.

31 Experimental Results Poisson approximation: we are not approximating the p-values of individual itemset hypotheses (which are small!), but finding the minimum s such that Prob(b1(s) + b2(s) ≤ ε) ≥ 1 − δ. This needs fewer simulations and less time per simulation ("few" itemsets).

32 Experimental Results Test II: α = 0.05, β = 0.05. R_{k,s*} = number of itemsets of size k with support ≥ s*. Example: an itemset of size 154 with support ≥ 7.

33 Experimental Results Standard multi-hypothesis test: β = 0.05. R = size of the output of the standard multi-hypothesis test; R_{k,s*} = size of the output of Test II.

34 Collaborators Adam Kirsch (Harvard), Michael Mitzenmacher (Harvard), Andrea Pietracaprina (U. of Padova), Geppino Pucci (U. of Padova), Eli Upfal (Brown U.), Fabio Vandin (U. of Padova)

