1 Data Streams and Applications in Computer Science David Woodruff IBM Almaden Presburger lecture, ICALP, 2014

2 Thanks to my advisors Prof. Ron Rivest, Prof. Piotr Indyk, and Prof. Andy Yao. Thanks for your mentorship and research advice, and early guidance on a path in theoretical computer science.

3 and my amazing summer interns: Arnab Bhattacharyya, Jelani Nelson, Huy Nguyen, Marco Molinaro, Yi Li, Eric Price, Grigory Yaroslavtsev

4 and my awesome collaborators, in the theory group at IBM and throughout the world!

5 My current research interests: Communication Complexity, Data Stream Algorithms and Lower Bounds, Graph Algorithms, Machine Learning, Numerical Linear Algebra, Sketching, Sparse Recovery

6 Talk Outline Data Stream Model and Sample Results – Distinct Elements – Frequency Moments – Characterization of Algorithms Connections to Other Areas – Compressed Sensing – Linear Algebra – Machine Learning

7 Data Streams A data stream is a sequence of data that is too large to be stored in available memory. Examples: – Internet search logs – Network traffic – Financial transactions – Sensor networks – Scientific data streams (astronomical, genomics, physical simulations) …

8 Streaming Model Stream of elements a_1, …, a_m, each in {1, …, n}. Single or small number of passes over the data. Algorithms should work for any ordering of the elements. Almost all algorithms are randomized and approximate – usually necessary to achieve efficiency; the randomness is in the algorithm, not the input. Goals: minimize space complexity (in bits) and processing time. [Figure: a stream of elements … 0 1 1 3 7 3 4]

9 Vector Interpretation Think of x as an n-dimensional vector – initially, x = 0^n. Insertion of i is interpreted as x_i ← x_i + 1. Output an approximation to f(x) with high probability. [Figure: the stream 8 2 1 9 1 9 2 4 4 9 4 2 5 4 2 5 8 5 2 5 and the resulting frequency vector x over coordinates 1–9]
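A minimal Python sketch of this vector interpretation, using the example stream from the slide (the variable names are illustrative):

```python
# Maintain the frequency vector x under the slide's insertion stream.
n = 9
x = [0] * (n + 1)  # 1-indexed: x[1..n], initially 0^n
stream = [8, 2, 1, 9, 1, 9, 2, 4, 4, 9, 4, 2, 5, 4, 2, 5, 8, 5, 2, 5]
for i in stream:
    x[i] += 1      # insertion of i: x_i <- x_i + 1
print(x[1:])       # frequencies of 1..9: [2, 5, 0, 4, 4, 0, 0, 2, 3]
```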

10 (1) Distinct Elements The streaming model originated in the work of Flajolet and Martin, '85, who studied the distinct elements question. – The number of distinct elements, denoted F_0, is |{i | x_i > 0}|. – Output a number Z with F_0 ≤ Z ≤ (1+ε)F_0 with 99% probability. – Can we do better than just storing all the coordinates of x? – Yes, and tight bounds are known [Indyk, W], [W], [Kane, Nelson, W]: Θ(1/ε² + log n) bits of space, O(1) processing time. – Connections: to prove the tight lower bound, the Gap-Hamming communication problem was introduced.
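Not the optimal Θ(1/ε² + log n) algorithm cited above, but a simple k-minimum-values estimator illustrating the hashing idea behind F_0 estimation (the function name and parameters here are mine, purely for illustration):

```python
import random

def kmv_distinct(stream, k=64, seed=0):
    """Estimate F_0: hash each element to [0,1) and keep the k smallest
    distinct hash values. If the k-th smallest is v, the k hash values
    look uniform on [0, v], so F_0 is roughly (k-1)/v."""
    salt = random.Random(seed).getrandbits(64)
    smallest = set()                 # the k smallest distinct hash values
    for a in stream:
        h = (hash((salt, a)) & 0xFFFFFFFF) / 2**32   # hash to [0, 1)
        if h not in smallest:
            smallest.add(h)
            if len(smallest) > k:
                smallest.remove(max(smallest))       # O(k); fine for a sketch
    if len(smallest) < k:
        return len(smallest)         # fewer than k distinct: exact count
    return (k - 1) / max(smallest)   # estimate of F_0
```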

11 Gap-Hamming Problem x ∈ {0,1}^n, y ∈ {0,1}^n. Promise: the Hamming distance satisfies Δ(x,y) > n/2 + εn or Δ(x,y) < n/2 − εn. Lower bound of Ω(1/ε²) for randomized 1-way communication [Indyk, W], [W], [Jayram, Kumar, Sivakumar]. Same for 2-way communication [Chakrabarti, Regev]. Applications: in information complexity, functional monitoring, embeddings, linear algebra, differential privacy, sparsifiers, … (Andoni, Brody, Clarkson, de Wolf, Jayram, Krauthgamer, McGregor, Mironov, Pitassi, Reingold, Sherstov, Talwar, Vadhan, Vidick, W, Zhang, …)

12 (2) Frequency Moments The streaming model was revived in the work of Alon, Matias, and Szegedy, '96 [AMS]. Consider the more general turnstile streaming model [coined by Muthukrishnan]: – positive and negative updates, so x_i ← x_i + 1 or x_i ← x_i − 1 – summarize statistics of the difference x − y of two streams of insertions

13 Frequency Moments [AMS] study the frequency moments F_p = Σ_{i=1}^n |x_i|^p, or equivalently the ℓ_p norms. – Summarize the skewness of an empirical distribution. – F_2 is used in computing self-join sizes, geometry, and linear algebra. – F_1 is used for measuring distance between distributions, and in "robust" algorithms (regression, subspace approximation). [Figure: a flat vs. a skewed distribution]
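As a tiny concrete instance of the definition, computed directly on the frequency vector from slide 9:

```python
# F_p = sum_i |x_i|^p, straight from the definition above.
def frequency_moment(x, p):
    return sum(abs(xi) ** p for xi in x)

x = [2, 5, 0, 4, 4, 0, 0, 2, 3]      # frequency vector from slide 9
print(frequency_moment(x, 1))        # F_1 = 20, the stream length
print(frequency_moment(x, 2))        # F_2 = 74, the self-join size
```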

14 Frequency Moments Output a number Z with F_p ≤ Z ≤ (1+ε)F_p with 99% probability. Near-tight bounds are known (Andoni, Bar-Yossef, Braverman, Chakrabarti, Coppersmith, Cormode, Ganguly, Gronemeier, Indyk, Jayram, Kane, Krauthgamer, Kumar, Li, Nelson, Porat, Sivakumar, Sun, W, …). Any guesses on how the space bounds depend on p?

15 Frequency Moments F_2 is the "breaking point": – F_p for p ≤ 2 is doable in Õ(1) bits of space – F_p for p > 2 requires Θ̃(n^{1−2/p}) bits of space. Algorithms achieve Õ(1) processing times. Connections: the "subsampling + heavy hitters" technique for the upper bound – used in many data stream, embedding, and linear algebra problems: earthmover distance, mixed norms, sampling in the turnstile model, compressed sensing, graph sparsifiers, regression – optimally solves Σ_{i=1}^n G(x_i) problems [Braverman, Ostrovsky]

16 Subsampling + Heavy Hitters CountSketch [Charikar, Chen, Farach-Colton]: – Give each coordinate i a random sign σ(i) ∈ {−1, 1}. – Randomly partition the coordinates into B buckets via a hash function h, maintaining c_j = Σ_{i : h(i) = j} σ(i) · x_i in the j-th bucket. – Estimate x_i as σ(i) · c_{h(i)}. – Estimation error ≈ |x|_2 / √B. – Can be used to find the "heavy hitters". – It is a linear map x → S · x, so it is easy to maintain under updates. [Figure: coordinates x_1, …, x_10 hashed into buckets]
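A minimal single-repetition CountSketch in Python, following the description above (the class name and parameters are mine; a full implementation takes a median over O(log n) independent repetitions to boost the success probability):

```python
import random

class CountSketch:
    def __init__(self, n, B, seed=0):
        rng = random.Random(seed)
        self.sigma = [rng.choice((-1, 1)) for _ in range(n)]  # random signs
        self.h = [rng.randrange(B) for _ in range(n)]         # bucket of each i
        self.c = [0] * B                                      # bucket counters

    def update(self, i, delta=1):
        # Turnstile update x_i <- x_i + delta; the sketch is a linear map,
        # so only a single counter changes.
        self.c[self.h[i]] += self.sigma[i] * delta

    def estimate(self, i):
        # Estimate x_i as sigma(i) * c_{h(i)}; error is about |x|_2 / sqrt(B).
        return self.sigma[i] * self.c[self.h[i]]
```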

17 Subsampling + Heavy Hitters Subsampling [Indyk, W]: – Create a nested sequence of subsets of [n]: [n] = L_{log n} ⊇ L_{log n − 1} ⊇ … ⊇ L_0, where L_i contains about 2^i random coordinates. – Run CountSketch to find the heavy hitters of each restricted vector x_{L_i}. – Estimate the number of coordinates "at every scale". – Obtain a rough approximation x' to x. [Figure: an example x ∈ R^n, plotting coordinate value (1, n^{1/4}, n^{1/3}, n^{1/2}, Θ(n)) against the number of coordinates at each value]
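A sketch of the nested-subsets construction, under the assumption that each level keeps each coordinate of the level above independently with probability 1/2 (so |L_i| ≈ 2^i); one CountSketch per level then finds that level's heavy hitters:

```python
import random

def subsample_levels(n, seed=0):
    """Return L_0 ⊆ L_1 ⊆ ... ⊆ L_{log n} = [n], halving in expectation
    at each step down."""
    rng = random.Random(seed)
    levels = [set(range(n))]             # L_{log n} = [n]
    while len(levels[-1]) > 1:
        levels.append({i for i in levels[-1] if rng.random() < 0.5})
    return levels[::-1]                  # smallest level first
```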

18 (3) Characterization of Turnstile Algorithms All known algorithms in the turnstile model have the form: 1. Choose a random matrix A independent of x. 2. Maintain the "linear sketch" Ax in the stream. 3. Output a function of Ax. Question (?!): does the optimal algorithm for any function in the turnstile model have this form? [Li, Nguyen, W]: Yes, up to a factor of log n in the space. – Some caveats, e.g., one can't necessarily store A in low space. – For lower bounds this doesn't matter; it gives a simpler proof strategy, since one just needs to rule out linear sketches. [Figure: A · x = Ax]
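Linearity is what makes step 2 trivial to maintain, since A(x + δ·e_i) = Ax + δ·A[:, i]. A short numpy sketch (matrix sizes and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 10, 100
A = rng.standard_normal((k, n))   # chosen independently of x, up front
Ax = np.zeros(k)                  # the linear sketch, initially A @ 0^n

def update(i, delta):
    """Turnstile update x_i <- x_i + delta: only column i of A is touched,
    and x itself is never stored."""
    global Ax
    Ax += delta * A[:, i]
```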

19 Talk Outline Data Stream Model and Sample Results – Distinct Elements – Frequency Moments – Characterization of Algorithms Connections to Other Areas – Compressed Sensing – Linear Algebra – Machine Learning

20 Compressed Sensing Compute a sketch A · x with a small number of rows (a.k.a. measurements). Output x' which approximates x in the sense that |x' − x|_p ≤ (1+ε) |x − x_k|_q, where x_k is the best k-sparse approximation to x. Similar to the heavy hitters problem solved by CountSketch. Variations of CountSketch + subsampling give algorithms with a near-optimal number of measurements as a function of ε, k, p, q [Price, W]. For p = q = 2, one can reduce the number of measurements by adaptively invoking CountSketch [Indyk, Price, W]. [Figure: a vector x and its best 2-sparse approximation x_2]
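A crude illustration of the CountSketch connection (not the near-optimal [Price, W] scheme): estimate every coordinate from the CountSketch class sketched after slide 16 and keep the k largest in magnitude as x'.

```python
def topk_from_countsketch(cs, n, k):
    # cs is a CountSketch over n coordinates (see the sketch at slide 16).
    estimates = [(abs(cs.estimate(i)), i) for i in range(n)]
    estimates.sort(reverse=True)                       # largest magnitude first
    return {i: cs.estimate(i) for _, i in estimates[:k]}  # sparse x'
```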

21 Linear Algebra Least squares regression – fitting points to a line, or more generally a subspace: min_x |Ax − b|_2 for an n × d matrix A and an n × 1 vector b. Typically n >> d, i.e., the problem is over-constrained.

22 Linear Algebra If S is a random projection matrix: compute S·A and S·b, then solve min_x |SAx − Sb|_2. Intuition: randomly rotate the column span of [A b], then drop all but the first O(d) coordinates. For example, (0, 0, 0, …, 0, 1) ∈ R^n becomes approximately (±1/n^{1/2}, …, ±1/n^{1/2}) after rotation; dropping all but the first d coordinates and rescaling by (n/d)^{1/2} gives (±1/d^{1/2}, …, ±1/d^{1/2}) ∈ R^d.

23 Linear Algebra A 1+ε approximation in O(nd log n) + poly(d/ε) time using Fast Johnson-Lindenstrauss Transforms (a restricted family of projections). If we replace S with CountSketch, this still works! [Clarkson, W] – Leads to a running time of O(nnz(A)) + poly(d/ε), where nnz(A) is the number of non-zero entries of A. Low Rank Approximation – using CountSketch instead of the Fast Johnson-Lindenstrauss Transform improves the running time from O(nd log n) to O(nnz(A)) [Clarkson, W]. Beautiful follow-up works by Li, Mahoney, Meng, Miller, Nelson, Nguyen, Peng.
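A minimal sketch-and-solve regression in numpy with a CountSketch S (one random sign and one bucket per row of A, as in [Clarkson, W]); the sketch size m = 40·d is an illustrative guess, not a tuned bound:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10000, 20
A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

m = 40 * d                               # sketch size: poly(d/eps) rows
h = rng.integers(0, m, size=n)           # CountSketch: one bucket per row
sigma = rng.choice((-1.0, 1.0), size=n)  # and one random sign per row

SA = np.zeros((m, d)); Sb = np.zeros(m)
for i in range(n):                       # applying S touches each nonzero
    SA[h[i]] += sigma[i] * A[i]          # of A exactly once: O(nnz(A)) time
    Sb[h[i]] += sigma[i] * b[i]

x_sketch = np.linalg.lstsq(SA, Sb, rcond=None)[0]   # min_x |SAx - Sb|_2
x_exact = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(A @ x_sketch - b) / np.linalg.norm(A @ x_exact - b))
```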

24 Machine Learning CountSketch can be used to estimate inner products: – estimate ⟨x, y⟩ as ⟨Sx, Sy⟩ – E[⟨Sx, Sy⟩] = ⟨x, y⟩ – Var[⟨Sx, Sy⟩] ≤ |x|_2² |y|_2² / B. Replace expensive inner product computations in classification algorithms with approximations via CountSketch – perceptron and minimum enclosing ball [Clarkson, Hazan, W]. Often one is interested in non-linear kernel transformations of the input points: x_1, …, x_n → f(x_1), …, f(x_n) – the "tensor product" CountSketch of Pagh gives subspace embeddings of the polynomial kernel [Avron, Nguyen, W].
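A minimal numpy sketch of the inner product estimate, using the same (h, σ) for both vectors; the sizes n and B are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 10000, 500
h = rng.integers(0, B, size=n)
sigma = rng.choice((-1.0, 1.0), size=n)

def sketch(v):
    s = np.zeros(B)
    np.add.at(s, h, sigma * v)    # s_j = sum over i with h(i)=j of sigma(i)*v_i
    return s

x, y = rng.standard_normal(n), rng.standard_normal(n)
print(np.dot(sketch(x), sketch(y)), np.dot(x, y))  # estimate vs. exact
```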

25 Conclusions Many data stream and sketching techniques give efficient ways of "compressing" big data – a broadly applicable goal in computer science. – Compressed sensing, graph algorithms, linear algebra, machine learning, … – Recently I've been looking at shape-fitting and clustering problems, etc. – Also useful for proving lower bounds in other areas, e.g., the number of measurements in sparse recovery [Do Ba, Indyk, Price, W]. – I'm sure there are many other unexplored areas. Thank you!

