1
Stream Data Introduction or “Stream Data in 30 minutes or less…” Magdiel Galán CSE591: Data Mining Dr. Huan Liu Spring 2004
2
Outline: Streaming data description; Uses/applications; Problems/challenges; Main concepts: variance & k-medians, aging & sliding windows, algorithms; References
3
Warning… Disclaimer: this is just a quick, short tour of data stream concepts. Although some algorithms and proof equations are included in the presentation, they are for reference and will only be discussed briefly during the class presentation.
4
Sizing the challenge: Wal-Mart records 20 million transactions, Google handles 100 million searches, AT&T produces 275 million call records, and an Earth-sensing satellite produces GBs of data. All of this in just one day!
5
Characteristics/Description: Stream data sets are continuous, massive, unbounded, possibly infinite, and fast changing, and they require fast, real-time responses.
6
Example: Network Management Application. Network management involves monitoring and configuring network hardware and software to ensure smooth operation: monitor link bandwidth usage and estimate traffic demands; quickly detect faults and congestion and isolate the root cause; balance load and improve utilization of network resources. AT&T collects 100 GBs of NetFlow data each day! (from [GGR02])
7
Network Management Application (cont.): diagram of a Network Operations Center receiving network measurements and issuing alarms (from [GGR02]).
8
Uses/Applications: Banking/stocks/financials: credit card fraud detection, stock trend monitoring. Sensors: power grid balancing, engine controls, collision avoidance, driver sleep monitoring.
9
Problems/Challenges: "Zillions" of data items, continuous/unbounded; examples arrive faster than they can be mined. Applications may require fast, real-time responses. Examples: life threatening: collision avoidance; lost revenue/transactions: hung-up networks.
10
Problems/Challenges: Time/space constrained: not enough memory; can't afford to store or revisit the data; single-pass computation. External-memory algorithms for handling data sets larger than main memory cannot be used: they do not support continuous queries, and their real-time response is too slow.
11
Problems/Challenges In summary… Can’t stop to smell the roses… Only one chance/single pass/look at the data
12
Problems/Challenges: Other considerations. Classical algorithms (e.g., CART, C4.5) do not scale up to data streams [DH00]: most need the entire data set for analysis and rely on random access (or multiple passes) over the data. It is difficult to compute answers accurately with limited memory: with probability at least 1 − δ, algorithms compute an approximate answer within a factor of ε of the actual answer. Noise (bad sensors, outliers). Aging/old/stale data.
13
Computation Model: data streams feed a Stream Processing Engine, which maintains a synopsis in memory and produces (approximate) answers that drive decision making (diagram from [GGR02]).
14
Model Components. Synopsis: a summary of the data (samples, histograms). Processing engine: the implementation/management system, e.g., STREAM (Stanford): general-purpose; Aurora (Brown/MIT): sensor monitoring, dataflow; Telegraph (Berkeley): adaptive engine for sensors. Decision making: apply data mining techniques (decision trees, clusters, association rules).
15
Model Components: the remaining slides will focus on decision making and synopsis calculation.
16
Synopsis: Dealing with Time/Space Constraints. Since the data can't be retained or revisited, the best alternative is to summarize what has been seen. Basic stream synopsis computations: Random sampling: generate statistics from a representative sample of the data (a quick sketch follows). Histograms: represent the data by its distribution/grouping. Wavelets: a mathematical tool for hierarchical decomposition of functions/signals. This discussion will focus on histograms.
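As a quick illustration of the random-sampling synopsis mentioned above, here is a minimal sketch (not from the slides) of reservoir sampling, a standard single-pass way to keep a uniform sample of a stream in bounded memory:

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream in one pass."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)       # fill the reservoir first
        else:
            j = random.randint(0, i)     # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir
```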
17
Types of Histograms. Equi-depth: element counts per bucket are kept constant. V-optimal: minimize the frequency variance within buckets. Exponential histograms (EH): bucket sizes are non-decreasing powers of 2, where a bucket's size is the total number of 1's in it; for every bucket size other than that of the last (oldest) bucket, there are at least k/2 and at most k/2 + 1 buckets of that size. Example, k = 4: (1,1,2,2,2,4,4,4,8,8,…). EHs are an essential component of the "sliding windows" technique for addressing "aging" data.
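For the equi-depth case, here is a minimal sketch (illustrative only; in a streaming setting the input would typically be a sample or other synopsis rather than the full data) of choosing bucket boundaries so each bucket holds roughly the same number of elements:

```python
def equi_depth_histogram(values, num_buckets):
    """Equi-depth histogram: boundaries chosen so each bucket holds
    (roughly) the same number of elements."""
    data = sorted(values)
    n = len(data)
    boundaries = []
    for b in range(1, num_buckets):
        boundaries.append(data[b * n // num_buckets])  # quantile cut points
    return boundaries                                  # num_buckets - 1 cut points

# Example: 12 values into 4 buckets of depth 3 -> cut points [4, 7, 10]
print(equi_depth_histogram([5, 1, 9, 3, 7, 2, 8, 6, 4, 10, 12, 11], 4))
```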
18
Illustrations (from [GGR02]): an equi-depth histogram, a V-optimal histogram, and an exponential histogram (EH); note that the total count of elements is the same in each.
19
Sliding Windows Technique. Background: some applications rely on ALL historical data, but for most applications OLD data is considered less relevant and could skew results away from NEW trends or conditions: new processes/procedures, new hardware/sensors, new fashion trends.
20
Sliding Windows (cont.). Common approaches to addressing old data: Aging model [BDMO03]: elements are associated with "weights" that decrease over time, for example via an exponential decay formula (a small sketch follows this slide). Sliding windows model: only the last N elements are considered; examples are incorporated as they arrive; a record "expires" at time t + N (N is the window length); for bit-stream data, count only the 1's.
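A minimal sketch of the aging-model idea, assuming a fixed per-time-step decay factor (the class name and the decay value are illustrative, not taken from [BDMO03]):

```python
class DecayedCount:
    """Exponentially decayed count: older elements contribute less weight."""

    def __init__(self, decay=0.95):   # decay factor per time step (assumed parameter)
        self.decay = decay
        self.count = 0.0

    def tick(self, value):
        """Advance one time step and add the new element's weight."""
        self.count = self.count * self.decay + value
        return self.count
```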
21
Sliding Windows Description. Sliding windows approach (pseudo-pseudo code): Consider only the last N elements. Define k = 1/ε and round k/2 to the nearest integer. Time-stamp each "1" that arrives in the stream and insert it as a new first bucket (shifting existing buckets back); the new bucket's value is 1, since it holds a single "1". If the number of buckets with the same value exceeds k/2 + 1, merge the two oldest buckets of that value, always keeping at least k/2 buckets of each value; merging creates a new bucket whose size is the sum of the two, and the merge may cascade to the next size up. Eliminate the last (oldest) bucket once the time stamp of its most recent "1" falls outside the last N elements.
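A minimal Python sketch of the exponential-histogram idea behind this pseudocode (the bucket representation and method names are my own; see [DGIM02] for the precise algorithm and invariants). Buckets are kept newest-first as (timestamp of most recent 1, size) pairs; the two oldest buckets of a size are merged when too many accumulate, and the oldest bucket is dropped once its timestamp leaves the window:

```python
class ExponentialHistogram:
    """Approximate count of 1's in the last N bits of a 0/1 stream.
    Sketch of the BasicCounting idea from [DGIM02]; details simplified."""

    def __init__(self, window, k):
        self.window = window          # N
        self.k = k                    # k = 1/epsilon
        self.max_same = k // 2 + 1    # allow at most k/2 + 1 buckets per size
        self.time = 0
        self.buckets = []             # list of (timestamp, size), newest first

    def add(self, bit):
        self.time += 1
        if bit == 1:
            self.buckets.insert(0, (self.time, 1))
            self._merge()
        # expire the oldest bucket if it lies entirely outside the window
        if self.buckets and self.buckets[-1][0] <= self.time - self.window:
            self.buckets.pop()

    def _merge(self):
        """Cascade: merge the two oldest buckets of any over-full size."""
        size = 1
        while True:
            same = [i for i, (_, s) in enumerate(self.buckets) if s == size]
            if len(same) <= self.max_same:
                break
            i, j = same[-1], same[-2]       # the two oldest buckets of this size
            ts = self.buckets[j][0]         # keep the newer of their timestamps
            self.buckets[j] = (ts, 2 * size)
            del self.buckets[i]
            size *= 2

    def estimate(self):
        """Sum all bucket sizes, counting only half of the oldest bucket."""
        if not self.buckets:
            return 0
        total = sum(s for _, s in self.buckets)
        return total - self.buckets[-1][1] // 2
```

The estimate counts every bucket fully except the oldest, which is counted at half its size; that half-bucket uncertainty is where the 1 + ε relative error comes from.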
22
Sliding Window (SW) Model: bit stream …1 0 1 0 0 0 1 0 1 1 1 1 1 1 0 0 0 1 0 1 0 0 1 1…, with time increasing toward the current time; only the last N = 7 bits are inside the window (from [BDMO03]).
23
Example Run (assume k/2 = 2). Initial bucket sizes, oldest to newest: 32, 16, 8, 8, 4, 4, 2, 1, 1.
24
Example Run (assume k/2 = 2). Buckets: 32, 16, 8, 8, 4, 4, 2, 1, 1, and a new "1" arrives (the incoming data stream segment could be, e.g., "00010").
25
Example Run (assume k/2 = 2). After inserting the new "1": 32, 16, 8, 8, 4, 4, 2, 1, 1, 1. There are now too many size-1 buckets, so the two oldest 1's are merged into a 2: 32, 16, 8, 8, 4, 4, 2, 2, 1. Merge! Merged!
26
Example Run (assume k/2 = 2), continued: 32,16,8,8,4,4,2,1,1 → (a 1 arrives, the 1's merge) → 32,16,8,8,4,4,2,2,1 → (another 1 arrives) → 32,16,8,8,4,4,2,2,1,1 → (another 1 arrives; merges cascade through sizes 1, 2, 4 and 8) → 32,16,16,8,4,2,1. (Example from [DGIM02]/[GGR02].)
27
Example Run (assume k/2 = 2), recap: 32,16,8,8,4,4,2,1,1 → 32,16,8,8,4,4,2,2,1 → 32,16,8,8,4,4,2,2,1,1 → 32,16,16,8,4,2,1. Keep in mind, each value is the TOTAL number of 1's in that bucket.
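Feeding a stream into the sketch shown earlier produces the same kind of bucket structure; a small usage example (the window length, k, and the random stream are made up for illustration):

```python
import random

eh = ExponentialHistogram(window=100, k=4)    # epsilon = 1/k = 0.25
bits = [random.randint(0, 1) for _ in range(1000)]
for b in bits:
    eh.add(b)

exact = sum(bits[-100:])                      # true count of 1's in the window
print("bucket sizes (newest to oldest):", [s for _, s in eh.buckets])
print("estimate:", eh.estimate(), "exact:", exact)
```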
28
Statistics Over Sliding Windows. Bit stream: count the number of 1's [DGIM02]. Exact solution: Θ(N) bits. Algorithm BasicCounting: 1 + ε approximation (relative error!). Space: O((1/ε) log^2 N) bits. Time: O(log N) worst case, O(1) amortized per record. Lower bound: Ω((1/ε) log^2 N) bits of space.
29
Complexity. Number of buckets m ≤ [# of buckets of each size] × [# of distinct bucket sizes] ≤ (k/2 + 1) × (log(2N/k) + 1) = O(k log N). Each bucket requires O(log N) bits, so total memory is O(k log^2 N) = O((1/ε) log^2 N) bits. (from [BDMO03])
30
Benefits of Sliding Windows. Incorporates new elements as they appear. Easy to calculate statistics over data streams with respect to the last N elements based on the histogram. Can estimate the number of 1's within a factor of (1 + ε) using only Θ((1/ε) log^2 N) bits of memory. (from [BDMO03])
31
Expansion of Sliding Windows. The original sliding window method did not directly support two important statistics, because of the way buckets are "merged": k-medians and variance. A solution was devised by Babcock, Datar, Motwani and O'Callaghan [BDMO03]; their work derived a methodology for variance that was also applied to k-medians.
32
Variance and k-Medians. Variance: Σ (x_i − μ)^2, where μ = (Σ x_i)/N. k-median clustering: given N points (x_1, …, x_N) in a metric space, find k points C = {c_1, c_2, …, c_k} that minimize Σ d(x_i, C) (the assignment distance). Clustering will be covered in detail in a future presentation. (from [BDMO03])
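A tiny sketch of the two quantities exactly as defined on this slide (the points and centers below are made-up one-dimensional examples):

```python
def variance(xs):
    mu = sum(xs) / len(xs)                     # mean
    return sum((x - mu) ** 2 for x in xs)      # unnormalized variance, as on the slide

def kmedian_cost(points, centers):
    """Assignment distance: each point pays its distance to the nearest center."""
    return sum(min(abs(x - c) for c in centers) for x in points)

xs = [2, 4, 4, 4, 5, 5, 7, 9]
print(variance(xs))                            # 32.0
print(kmedian_cost(xs, centers=[4, 7]))        # 6
```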
33
Notation: V_i = variance of the i-th bucket; n_i = number of elements in the i-th bucket; μ_i = mean of the i-th bucket. [Diagram: buckets B_1, B_2, …, B_{m-1}, B_m covering the current window of size N] (from [BDMO03])
34
Variance – composition: B_{i,j} = concatenation of buckets i and j (from [BDMO03]).
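The composition formulas themselves did not survive in this transcript; combining the statistics of two disjoint buckets follows the standard identities for counts, means and sums of squared deviations (the kind of combination used in [BDMO03]). A small sketch, with a made-up example:

```python
def combine_buckets(n_i, mu_i, V_i, n_j, mu_j, V_j):
    """Statistics of the concatenation B_{i,j} from the statistics of B_i and B_j."""
    n = n_i + n_j
    mu = (n_i * mu_i + n_j * mu_j) / n
    V = V_i + V_j + (n_i * n_j / n) * (mu_i - mu_j) ** 2
    return n, mu, V

# Example: combining [1, 2, 3] (n=3, mu=2, V=2) and [7, 9] (n=2, mu=8, V=2)
print(combine_buckets(3, 2.0, 2.0, 2, 8.0, 2.0))   # ≈ (5, 4.4, 47.2)
```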
35
Decision Making. The problem of handling time-changing data also had a significant influence on decision algorithms. Pedro Domingos, who had originally co-developed a successful decision tree algorithm for streams (VFDT), also recognized the need to work with recent data, resulting in a new algorithm known as CVFDT. VFDT: Very Fast Decision Tree. CVFDT: Concept-adapting Very Fast Decision Tree, which handles concept drift by implementing a window approach.
36
Decision Making. Both VFDT and CVFDT make use of a statistical result known as the Hoeffding* bound. It is used to estimate the minimum number of examples needed to make a split decision at a node of the decision tree. This is the key concept that makes these algorithms work. * W. Hoeffding, "Probability Inequalities for Sums of Bounded Random Variables", Journal of the American Statistical Association, 1963.
37
Hoeffding Bound. Consider a real-valued random variable a whose range is R, and n independent observations of a with observed mean ā. The Hoeffding bound states that, with probability 1 − δ, the true mean of a is at least ā − ε, where ε = sqrt(R^2 ln(1/δ) / (2n)). (from [DH00]/[HSD01])
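A one-line computation of ε from R, δ and n (the numbers in the example are purely illustrative; for information gain with two classes the range is R = 1):

```python
import math

def hoeffding_epsilon(R, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)) from the Hoeffding bound."""
    return math.sqrt(R * R * math.log(1.0 / delta) / (2.0 * n))

# e.g. information gain with 2 classes (R = 1), delta = 1e-7, 1000 examples
print(hoeffding_epsilon(R=1.0, delta=1e-7, n=1000))   # ~0.09
```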
38
Hoeffding Bound Significance: this estimate/bound is incorporated into an ID3-type decision tree, hence VFDT/CVFDT. The difference in information gain between the two best attributes is evaluated against ε; once it exceeds ε, the best attribute is chosen for the split.
39
VFDT Algorithm (pseudocode figure from [DH00], not reproduced here).
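As a heavily simplified stand-in for the missing pseudocode, the core split test at a leaf might look like the following (this reuses hoeffding_epsilon from the sketch above; the gain values in the example are made up, and the real VFDT in [DH00] includes further details such as tie breaking and grace periods):

```python
def should_split(best_gain, second_gain, n_examples, delta=1e-7, R=1.0):
    """Simplified VFDT split test: split when the observed gain difference
    exceeds the Hoeffding bound epsilon for the n examples seen at the leaf."""
    eps = hoeffding_epsilon(R, delta, n_examples)
    return (best_gain - second_gain) > eps

# With 1000 examples at a leaf, a gain difference of 0.12 already exceeds eps ~ 0.09:
print(should_split(best_gain=0.30, second_gain=0.18, n_examples=1000))   # True
```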
40
VFDT Algorithm Results (experimental results figures from [DH00], not reproduced here).
41
CVFDT vs. VFDT. CVFDT is an extension of VFDT that incorporates "windowing". The CVFDT concept: generate the tree as usual, but over a window of w examples; monitor changes in the gain of the attributes; if the best attribute changes, grow an alternate subtree with the new "best" attribute, but keep it in the background; replace the original subtree if the alternate becomes more accurate (a small sketch follows).
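A rough sketch of the alternate-subtree idea described on this slide (the class and its interface are my own invention and not the actual CVFDT implementation; each "subtree" is modeled as a plain callable from example to predicted label):

```python
class AlternateSubtree:
    """Sketch of the CVFDT idea: grow an alternate subtree in the background
    and swap it in once it is more accurate on recent examples."""

    def __init__(self, main):
        self.main = main              # current subtree: callable example -> label
        self.alternate = None         # grown when the best attribute changes
        self.main_errors = 0
        self.alt_errors = 0

    def start_alternate(self, alt):
        """Called when windowed statistics suggest a different best attribute."""
        self.alternate = alt
        self.main_errors = 0
        self.alt_errors = 0

    def observe(self, example, label):
        """Score both subtrees on a new example; swap if the alternate is better."""
        self.main_errors += int(self.main(example) != label)
        if self.alternate is not None:
            self.alt_errors += int(self.alternate(example) != label)
            if self.alt_errors < self.main_errors:
                self.main, self.alternate = self.alternate, None
```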
42
The END – “El Final”. Concepts covered: data stream constraints (time/space), the data stream model (synopsis, decision maker), histograms, exponential histograms, sliding windows, variance, Hoeffding bounds, decision tree classifiers.
43
References
[BDMO03] B. Babcock, M. Datar, R. Motwani, and L. O'Callaghan. "Maintaining Variance and k-Medians over Data Stream Windows". ACM PODS, 2003. http://citeseer.nj.nec.com/591910.html http://www.stanford.edu/~babcock/papers/pods03.ppt
[DH00] P. Domingos and G. Hulten. "Mining High-Speed Data Streams". ACM KDD, 2000. http://citeseer.nj.nec.com/domingos00mining.html
[HSD01] G. Hulten, L. Spencer, and P. Domingos. "Mining Time-Changing Data Streams". ACM KDD, 2001. http://citeseer.nj.nec.com/hulten01mining.html
[DGIM02] M. Datar, A. Gionis, P. Indyk, and R. Motwani. "Maintaining Stream Statistics over Sliding Windows". ACM-SIAM SODA, 2002. http://www.stanford.edu/~babcock/papers/pods03.ppt
[GGR02] M. Garofalakis, J. Gehrke, and R. Rastogi. "Querying and Mining Data Streams: You Only Get One Look". SIGMOD 2002 (tutorial). http://www.bell-labs.com/user/minos/Talks/streams-tutorial02.ppt