The Stream Model, Sliding Windows, Counting 1's


Mining Data Streams: The Stream Model, Sliding Windows, Counting 1's

The Stream Model Data enters at a rapid rate from one or more input ports. The system cannot store the entire stream. How do you make critical calculations about the stream using a limited amount of (secondary) memory?

[Figure: a stream processor with several streams entering over time, e.g. a numeric stream (… 1, 5, 2, 7, 0, 9, 3 …), a character stream (… a, r, v, t, y, h, b …), and a bit stream (… 0, 0, 1, 0, 1, 1, 0 …). The processor answers queries and produces output using only limited storage.]

Applications --- (1) In general, stream processing is important for applications where New data arrives frequently. Important queries tend to ask about the most recent data, or summaries of data.

Applications --- (2) Mining query streams: Google wants to know what queries are more frequent today than yesterday. Mining click streams: Yahoo wants to know which of its pages are getting an unusual number of hits in the past hour.

Applications --- (3) Sensors of all kinds need monitoring, especially when there are many sensors of the same type feeding into a central controller, most of which are not sensing anything important at the moment. Telephone call records are summarized into customer bills.

Applications --- (4) Intelligence-gathering. Like the "evil-doers visit hotels" example from the beginning of the course, but with much more data at a much faster rate. Who calls whom? Who accesses which Web pages? Who buys what where?

Sliding Windows A useful model of stream processing is that queries are about a window of length N --- the N most recent elements received. Interesting case: N is still so large that it cannot be stored on disk. Or, there are so many streams that windows for all cannot be stored.

[Figure: a sliding window covering the N most recent elements of a character stream, between the past (elements already seen) and the future (elements yet to arrive).]

Counting Bits --- (1) Problem: given a stream of 0's and 1's, be prepared to answer queries of the form "how many 1's are in the last k bits?" where k ≤ N. Obvious solution: store the most recent N bits. When a new bit comes in, discard the (N+1)st (oldest) bit.
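The obvious solution can be sketched directly; a minimal Python sketch (the class and method names are illustrative, not from the slides):

```python
from collections import deque

class ExactWindowCounter:
    """Exact (memory-hungry) baseline: store the last N bits verbatim."""

    def __init__(self, n):
        self.n = n
        self.window = deque(maxlen=n)  # appending the (N+1)st bit evicts the oldest

    def add(self, bit):
        self.window.append(bit)

    def count_ones(self, k):
        # "How many 1's in the last k bits?" for k <= N: scan the k newest bits.
        return sum(list(self.window)[-k:])
```

This costs N bits of storage per stream, which is exactly the cost the next slide argues we may not be able to afford.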

Counting Bits --- (2) You can’t get an exact answer without storing the entire window. Real Problem: what if we cannot afford to store N bits? E.g., we are processing 1 billion streams and N = 1 billion, but we’re happy with an approximate answer.

Something That Doesn’t (Quite) Work Summarize exponentially increasing regions of the stream, looking backward. Drop small regions when they are covered by completed larger regions.

Example We can construct the count of the last N bits, except we're not sure how many of the last 6 are included. [Figure: a bit stream summarized by regions of exponentially increasing length looking backward; each region carries its count of 1's, and the oldest region's contribution is marked "?" because only part of it lies inside the last N bits.]

What’s Good? Stores only O(log² N) bits. Easy update as more bits enter. Error in count no greater than the number of 1’s in the “unknown” area.

What’s Not So Good? As long as the 1’s are fairly evenly distributed, the error due to the unknown region is small --- no more than 50%. But it could be that all the 1’s are in the unknown area at the end. In that case, the error is unbounded.

Fixup Instead of summarizing fixed-length blocks, summarize blocks with specific numbers of 1’s. Let the block “sizes” (number of 1’s) increase exponentially. When there are few 1’s in the window, block sizes stay small, so errors are small.

DGIM* Method Store O(log² N) bits per stream. Gives approximate answer, never off by more than 50%. Error factor can be reduced to any fraction > 0, with a more complicated algorithm and proportionally more stored bits. *Datar, Gionis, Indyk, and Motwani

Timestamps Each bit in the stream has a timestamp, starting 1, 2, … Record timestamps modulo N (the window size), so we can represent any relevant timestamp in O(log N) bits.

Buckets A bucket in the DGIM method is a record consisting of: (1) the timestamp of its end [O(log N) bits], and (2) the number of 1’s between its beginning and end [O(log log N) bits]. Constraint on buckets: the number of 1’s must be a power of 2. That explains the O(log log N) in (2).

Representing a Stream by Buckets Either one or two buckets with the same power-of-2 number of 1’s. Buckets do not overlap in timestamps. Buckets are sorted by size (# of 1’s). Earlier buckets are not smaller than later buckets. Buckets disappear when their end-time is > N time units in the past.
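These invariants can be captured in a small checker; a sketch assuming a hypothetical representation of buckets as (end_timestamp, size) pairs ordered newest first:

```python
def check_dgim_invariants(buckets, n, current_time):
    """Assert the DGIM bucket invariants; buckets = [(end_timestamp, size), ...], newest first."""
    sizes = [size for _, size in buckets]
    for s in sizes:
        # Every bucket size is a power of 2.
        assert s > 0 and s & (s - 1) == 0
    for s in set(sizes):
        # Each power-of-2 size occurs either once or twice.
        assert 1 <= sizes.count(s) <= 2
    # Earlier buckets are not smaller: sizes ascend from newest to oldest.
    assert sizes == sorted(sizes)
    ends = [end for end, _ in buckets]
    # Buckets do not overlap: end-times strictly decrease going back in time.
    assert ends == sorted(ends, reverse=True)
    # No surviving bucket's end-time is more than N time units in the past.
    assert all(current_time - end < n for end in ends)
    return True
```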

Example At least one bucket of size 16, partially beyond the window; two of size 8. [Figure: the bit stream 1001010110001011010101010101011010101010101110101010111010100010110010 partitioned into DGIM buckets over the last N bits.]

Updating Buckets --- (1) When a new bit comes in, drop the last (oldest) bucket if its end-time is prior to N time units before the current time. If the current bit is 0, no other changes are needed.

Updating Buckets --- (2) If the current bit is 1: Create a new bucket of size 1, for just this bit. End timestamp = current time. If there are now three buckets of size 1, combine the oldest two into a bucket of size 2. If there are now three buckets of size 2, combine the oldest two into a bucket of size 4. And so on…
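The update rules above can be sketched as follows (the bucket representation is an assumption: (end_timestamp, size) pairs, newest first):

```python
def dgim_add(buckets, bit, t, n):
    """Process one bit arriving at time t; mutates the bucket list in place."""
    # Drop the oldest bucket if its end-time has fallen out of the window.
    if buckets and t - buckets[-1][0] >= n:
        buckets.pop()
    if bit == 0:
        return  # a 0-bit needs no further changes
    # A 1-bit: create a new bucket of size 1 ending now.
    buckets.insert(0, (t, 1))
    # Cascade: whenever three buckets share a size, merge the two oldest
    # into one bucket of twice the size, keeping the more recent end-time.
    i = 0
    while i + 2 < len(buckets) and buckets[i][1] == buckets[i + 2][1]:
        end, size = buckets[i + 1]
        buckets[i + 1:i + 3] = [(end, 2 * size)]
        i += 1
```

Feeding in six 1-bits in a row, for example, leaves bucket sizes [1, 1, 2, 2] (newest first), which together account for all six 1's.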

Example [Figure: six successive snapshots of the bit stream as new bits arrive, showing a new size-1 bucket created for each incoming 1 and equal-size buckets being merged in cascade.]

Querying To estimate the number of 1’s in the most recent N bits: Sum the sizes of all buckets but the last. Add in half the size of the last bucket. Remember, we don’t know how many 1’s of the last bucket are still within the window.
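As a sketch (assuming bucket sizes are listed newest to oldest):

```python
def dgim_estimate(sizes):
    """Estimate the number of 1's in the window from bucket sizes, newest first."""
    if not sizes:
        return 0
    # Every bucket except the oldest lies entirely inside the window;
    # the oldest may straddle the window boundary, so count half of it.
    return sum(sizes[:-1]) + sizes[-1] // 2
```

For instance, sizes [1, 1, 2, 4, 8] give the estimate 1 + 1 + 2 + 4 + 8//2 = 12.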

Error Bound Suppose the last bucket has size 2^k. Then by assuming 2^(k-1) of its 1’s are still within the window, we make an error of at most 2^(k-1). Since there is at least one bucket of each of the sizes 1, 2, …, 2^(k-1), the true sum is at least 2^k - 1. Thus, the error is at most 50%.
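The bound can be tightened into an explicit 50% by one extra observation not spelled out on the slide: at least one 1 of the last bucket must still lie in the window, since otherwise that bucket would already have been dropped. Then:

```latex
\text{error} \le 2^{k-1}, \qquad
\text{true count} \;\ge\; \underbrace{(1 + 2 + \cdots + 2^{k-1})}_{\text{complete buckets}} \;+\; \underbrace{1}_{\text{last bucket}} \;=\; 2^k,
\qquad\text{so}\qquad
\frac{\text{error}}{\text{true count}} \;\le\; \frac{2^{k-1}}{2^{k}} \;=\; \frac{1}{2}.
```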

Extensions (For Thinking) How to improve the precision of the result? Can we use the same trick to answer queries “How many 1’s in the last k ?” where k < N ? Can we handle the case where the stream is not bits, but integers, and we want the sum of the last k ?

Application: Counting Items Problem: given a stream, which items appear more than s times in the window? Possible solution: think of the stream of baskets as one binary stream per item. 1 = item present; 0 = not present. Use DGIM to estimate counts of 1’s for all items.
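A sketch of the reduction (function names are illustrative, and an exact count stands in for the per-item DGIM estimate; in practice each per-item bit stream would be summarized with DGIM rather than stored):

```python
def per_item_bitstreams(baskets, items):
    # One binary stream per item: bit t is 1 iff the item appears in basket t.
    return {item: [1 if item in b else 0 for b in baskets] for item in items}

def frequent_items(baskets, items, n, s):
    # Items whose count of 1's in the last n baskets exceeds s
    # (exact here; DGIM would make this approximate but cheap).
    streams = per_item_bitstreams(baskets, items)
    return {item for item, bits in streams.items() if sum(bits[-n:]) > s}
```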

Pros and Cons In principle, you could count frequent pairs or even larger sets the same way. One stream per itemset. Drawbacks: Only approximate. Number of itemsets is way too big.