False Positive or False Negative: Mining Frequent Itemsets from High Speed Transactional Data Streams. Jeffrey Xu Yu, Zhihong Chong (崇志宏), Hongjun Lu, Aoying Zhou.

Presentation transcript:

False Positive or False Negative: Mining Frequent Itemsets from High Speed Transactional Data Streams. Jeffrey Xu Yu, Zhihong Chong (崇志宏), Hongjun Lu, Aoying Zhou. VLDB 2004.

Introduction
Mining a data stream:
- Data items arrive continuously
- Only one scan of the data
- Limited memory
- Bounded error

Introduction
This paper develops an algorithm that effectively mines frequent itemsets from a data stream with a bound on memory consumption, using a false-negative approach.

False Positive
Most existing algorithms for mining frequent itemsets over streams are false-positive oriented:
- Memory consumption is controlled by an error parameter ε
- Itemsets whose support is below the minimum support s but above s − ε may be reported as frequent
- Example: Approximate Frequency Counts over Data Streams (VLDB 2002)

False Positive
Memory bound: O((1/ε) · log(εN))
Dilemma of the false-positive approach:
- The smaller ε is, the fewer false-positive itemsets are included
- But memory consumption grows reciprocally in ε
- In Apriori-style mining, the k-th level frequent itemsets generate the (k+1)-th level candidate itemsets, so false positives at one level inflate the candidates at the next
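
To make the trade-off concrete, here is a minimal single-item sketch of the Lossy Counting algorithm from the cited VLDB 2002 paper (Manku and Motwani); the class layout, variable names, and query rule are illustrative, not taken from the slides.

    import math

    class LossyCounter:
        """Minimal single-item Lossy Counting sketch (false-positive style).

        Keeps roughly O((1/eps) * log(eps * N)) entries. Every item with
        true frequency >= s*N is reported, but items with frequency in
        [(s - eps)*N, s*N) may be reported too (false positives).
        """

        def __init__(self, eps):
            self.eps = eps
            self.width = math.ceil(1.0 / eps)   # bucket width
            self.n = 0                          # items seen so far
            self.entries = {}                   # item -> (count, delta)

        def add(self, item):
            self.n += 1
            bucket = math.ceil(self.n / self.width)
            count, delta = self.entries.get(item, (0, bucket - 1))
            self.entries[item] = (count + 1, delta)
            # At each bucket boundary, prune entries with small counts.
            if self.n % self.width == 0:
                self.entries = {x: (c, d) for x, (c, d) in self.entries.items()
                                if c + d > bucket}

        def frequent(self, s):
            # Report items whose maintained count exceeds (s - eps) * n.
            return [x for x, (c, _) in self.entries.items()
                    if c >= (s - self.eps) * self.n]

Shrinking eps reduces the number of false positives but inflates the number of kept entries, which is exactly the dilemma stated above.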

False Positive vs. False Negative
- False positive: every itemset with support at least s is output, and some itemsets with support between s − ε and s are also (incorrectly) output.
- False negative: every itemset with support at least s + ε is output, some itemsets with support between s and s + ε may be missed, and no itemset with support below s is output.

False Negative
Error control and pruning:
- ε_n is used to prune data and to control the error bound; it changes as the stream is processed
- ε_n decreases toward zero as the number of observations grows
- s: minimum support; n: number of observations seen so far
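
As a concrete sketch, the Chernoff-bound slides below lead to a closed form for ε_n; assuming that form, ε_n = sqrt(2·s·ln(2/δ)/n) (an inference from the derivation steps, not a formula printed on this slide), it can be computed as:

    import math

    def eps_n(n, s, delta):
        # Assumed Chernoff-derived error bound after n observations:
        # eps_n = sqrt(2 * s * ln(2/delta) / n); shrinks as n grows.
        return math.sqrt(2.0 * s * math.log(2.0 / delta) / n)

    # eps_n(1000, 0.01, 0.1)     -> about 0.0077
    # eps_n(1000000, 0.01, 0.1)  -> about 0.00024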

False Negative
Memory control:
- δ (reliability), rather than ε, controls memory consumption
- Memory consumption is related to ln(1/δ)
- The approach never reports a 1-itemset whose support is below s as frequent

Comparison: False Positive & False Negative
Recall and precision, where A is the set of true frequent itemsets and B is the set of obtained frequent itemsets:
Recall = |A∩B| / |A|
Precision = |A∩B| / |B|
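
A trivial sketch of these two measures over sets of itemsets (the argument names are illustrative):

    def recall_precision(true_frequent, mined_frequent):
        # Recall = |A ∩ B| / |A|, Precision = |A ∩ B| / |B|.
        a, b = set(true_frequent), set(mined_frequent)
        inter = len(a & b)
        return inter / len(a), inter / len(b)

    # A false-positive miner returns every true itemset plus extras,
    # so recall is 1.0 but precision drops:
    # recall_precision({"ab", "cd", "ef"}, {"ab", "cd", "ef", "gh", "ij"})
    # -> (1.0, 0.6)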

Comparison: False Positive (δ = 0.1)

s (%)   True Size   Mined Size   Recall   Precision
0.08    21,361      126,307      1.00     0.17
0.10    12,252       68,275      1.00     0.18
0.20     2,359       23,154      1.00     0.16

Comparison: False Negative (mined with s + ε as the minimum support)

s (%)   True Size   Mined Size   Recall   Precision
0.08    21,361      18,351       0.86     1.00
0.10    12,252      10,411       0.85     1.00
0.20     2,359       1,739       0.74     1.00

Chernoff Bound
The Chernoff bound gives a probabilistic guarantee on the estimation of statistics about the underlying data:
Pr{ T ≥ e·E[T] } ≤ e^(−E[T])
Example: lottery numbers 0000, 0001, …, 9999; 1,000,000 people each buy a $1 ticket, so E[#winners] = 1,000,000 / 10,000 = 100. Since e·100 ≈ 271.8, the bound gives Pr{ T ≥ 273 } ≤ e^(−100).

Chernoff Bound
Bernoulli trials (coin flips): Pr[o_i = 1] = p, Pr[o_i = 0] = 1 − p.
Let r be the number of heads in n coin flips; its expectation is np.
For any γ > 0:
Pr{ |r − np| ≥ npγ } ≤ 2·e^(−npγ²/2)

Chernoff Bound
Normalize by n (write r for the fraction r/n) and take the minimum support s as p. Replace sγ with ε_n and let the right-hand side equal δ. Then:
Pr{ |running support − true support| ≥ ε_n } ≤ δ
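
Spelling out the substitution (this derivation is reconstructed from the steps above; the closed form for ε_n is an inference, not shown verbatim on the slide):

    \delta = 2e^{-ns\gamma^{2}/2}
    \;\Longrightarrow\;
    \gamma = \sqrt{\frac{2\ln(2/\delta)}{ns}}
    \;\Longrightarrow\;
    \varepsilon_n = s\gamma = \sqrt{\frac{2\,s\ln(2/\delta)}{n}}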

Frequent or Infrequent
- A pattern X is potentially infrequent with respect to n if count(X) / n < s − ε_n
- A pattern X is potentially frequent with respect to n if it is not potentially infrequent
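
A tiny sketch of these two tests (function and parameter names are illustrative):

    def potentially_infrequent(count, n, s, eps_n):
        # X is potentially infrequent after n observations if its observed
        # frequency falls below s - eps_n.
        return count / n < s - eps_n

    def potentially_frequent(count, n, s, eps_n):
        return not potentially_infrequent(count, n, s, eps_n)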

FDPM-1(s, δ)

FDPM-1(s, δ)
(Running example from the slide: items arrive from the source stream, e.g., A, D, B, C, …; a count is maintained for each item; whenever memory is full, a new ε_n is computed and potentially infrequent items are deleted from the count table.)

FDPM-1(s, δ)
The algorithm ensures:
- Every item whose true frequency exceeds sN is output with probability at least 1 − δ
- No item whose true frequency is less than sN is output
- The probability that an estimated support equals the true support is no less than 1 − δ
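
The following is a simplified single-item sketch in the spirit of FDPM-1 as described on these slides: count items, and when the structure is full recompute ε_n and drop potentially infrequent entries. The pruning threshold, the output rule using s·n, the capacity parameter, and the eps_n closed form are my reading of the slides plus the earlier assumption, not the authors' exact pseudo-code.

    import math

    def eps_n(n, s, delta):
        # Assumed Chernoff-derived error bound after n observations.
        return math.sqrt(2.0 * s * math.log(2.0 / delta) / n)

    def fdpm1_sketch(stream, s, delta, capacity=10000):
        """Single-pass, false-negative-style frequency counting over items.

        Maintain a count table; whenever it grows past 'capacity',
        recompute eps_n and delete potentially infrequent entries
        (count / n < s - eps_n). Deleted items lose their counts, so
        maintained counts never overestimate true counts.
        """
        counts = {}
        n = 0
        for item in stream:
            n += 1
            counts[item] = counts.get(item, 0) + 1
            if len(counts) > capacity:              # "memory is full"
                threshold = (s - eps_n(n, s, delta)) * n
                counts = {x: c for x, c in counts.items() if c >= threshold}
        # Output rule (an assumption): report items whose maintained count
        # reaches s*n. Items pruned earlier may be missed (false negatives),
        # but no item whose true count is below s*n can be reported.
        return {x: c for x, c in counts.items() if c >= s * n}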

Memory Bound
Every pattern X kept in P satisfies sup(X) ≥ (s − ε_n) · n. Since the counts of the kept patterns cannot sum to more than n, |P| · (s − ε_n) · n ≤ n, and therefore |P| ≤ 1 / (s − ε_n) whenever s − ε_n > 0.
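
A quick arithmetic check of the bound (the numbers are purely illustrative):

    # If s = 0.01 and eps_n has shrunk to 0.002, then at most
    # 1 / (0.01 - 0.002) = 125 entries need to be kept in P.
    s, eps = 0.01, 0.002
    print(1.0 / (s - eps))   # about 125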

FDPM-2(s, δ)

Mining Frequent Itemsets from a Data Stream
(Running example from the slide: transactions such as {A,B}, …, {E,F,G} arrive from the source stream; frequent itemsets such as {A}, {B}, {A,B}, {E}, {F}, {E,F} are mined and their counts are maintained in the pool P; whenever memory is full, a new ε_n is computed and potentially infrequent itemsets are deleted.)
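
To illustrate the itemset case, here is a small generic helper that mines the frequent itemsets of one batch of transactions. It is a plain Apriori-style enumeration offered only to illustrate the "mine a batch, then update the pool P" step suggested by this slide; it is not the paper's FDPM-2 procedure, and all names are illustrative.

    from itertools import combinations

    def frequent_itemsets_in_batch(transactions, min_count):
        """Level-wise mining of one batch: maps itemset (frozenset) -> count."""
        transactions = [frozenset(t) for t in transactions]
        # Level 1: count single items.
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        frequent = {x: c for x, c in counts.items() if c >= min_count}
        result = dict(frequent)
        k = 1
        while frequent:
            # Candidate generation (brute force for brevity): every
            # (k+1)-item combination of items appearing in some frequent
            # k-itemset; counting then filters out the infrequent ones.
            items = sorted({i for itemset in frequent for i in itemset})
            candidates = [frozenset(c) for c in combinations(items, k + 1)]
            counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
            frequent = {x: c for x, c in counts.items() if c >= min_count}
            result.update(frequent)
            k += 1
        return result

    # Example:
    # frequent_itemsets_in_batch([["A", "B"], ["A", "B", "C"], ["B"]], 2)
    # -> { {'B'}: 3, {'A'}: 2, {'A','B'}: 2 }   (keys are frozensets)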

Conclusion
- The proposed approach is false-negative oriented
- It operates within limited memory
- The error is bounded with a probabilistic guarantee (at least 1 − δ)