Communication Steps for Parallel Query Processing
Paraschos Koutris, Paul Beame, Dan Suciu
University of Washington · PODS 2013



Motivation
- Understand the complexity of parallel query processing on big data
- Focus on shared-nothing architectures; MapReduce is one example
- Dominating parameters of computation:
  – Communication cost
  – Number of communication rounds

Computation Models
- The MapReduce model [Afrati et al., 2012]: tradeoff between reducer size (input size of a reducer) and replication rate (in how many reducers a tuple is sent)
- The MUD (Massive, Unordered, Distributed) model [Feldman et al., 2010]
- The MRC model [Karloff et al., 2010]: MapReduce computation + load balancing

The MPC Model
- N: total input size (in bits)
- p: number of servers; servers have unlimited computational power
- Computation proceeds in synchronous rounds:
  – Local computation
  – Global communication
[Figure: input of size N distributed across servers 1 through p; rounds 1, 2, ... alternate local computation and global communication]

MPC Parameters
- Each server receives in total a bounded number of bits: O(N/p · p^ε), where 0 ≤ ε < 1
- Complexity parameters:
  – Number of computation rounds r
  – Space exponent ε (governs data replication)
Question: what are the space exponent/round tradeoffs for query processing in the MPC model?

Our Results
- One round:
  – Lower bounds on the space exponent for any (randomized) algorithm that computes a Conjunctive Query
  – The lower bound holds for a class of inputs (matching databases), for which we show tight upper bounds
- Multiple rounds:
  – Almost tight space exponent/round tradeoffs for tree-like Conjunctive Queries under a weaker communication model

Outline
1. Warm-up: The Triangle Query
2. One Communication Round
3. Multiple Communication Rounds

Conjunctive Queries
We mainly study full Conjunctive Queries without self-joins, e.g.:
  Q(x, y, z, w, v) = R(x, y, z), S(x, w, v), T(v, z)
The hypergraph of the query Q:
- Variables as vertices
- Atoms as hyperedges
[Figure: hypergraph with vertices x, y, z, v, w and hyperedges R, S, T]

The Triangle Query (1)
Find all triangles: Q(x, y, z) = R(x, y), S(y, z), T(z, x)
2-round algorithm:
- Round 1 [R hash-join S]:
  – Send R(a, b) and S(b, c) to server h(b)
  – Join locally: U(a, b, c) = {R(a, b), S(b, c)}
- Round 2 [U hash-join T]:
  – Send U(a, b, c) and T(c, a) to server h(c)
  – Join locally: Q(a, b, c) = {U(a, b, c), T(c, a)}
- Replication ε = 0
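The two-round plan can be simulated on a single machine; the sketch below (the helper `hash_partition`, the server count `p`, and the use of Python's built-in `hash` are illustrative assumptions, not from the slides) hashes both sides of each join to the same servers and joins locally:

```python
def hash_partition(tuples, key_index, p):
    # Route each tuple to server hash(key) mod p, keyed on one attribute.
    servers = [[] for _ in range(p)]
    for t in tuples:
        servers[hash(t[key_index]) % p].append(t)
    return servers

def two_round_triangles(R, S, T, p=4):
    # Round 1: hash-join R(x, y) with S(y, z) on the shared variable y.
    U = []
    for R_frag, S_frag in zip(hash_partition(R, 1, p), hash_partition(S, 0, p)):
        U.extend((a, b, c) for (a, b) in R_frag
                 for (b2, c) in S_frag if b == b2)
    # Round 2: hash-join the intermediate U(x, y, z) with T(z, x) on z.
    out = []
    for U_frag, T_frag in zip(hash_partition(U, 2, p), hash_partition(T, 0, p)):
        out.extend((a, b, c) for (a, b, c) in U_frag
                   for (c2, a2) in T_frag if c == c2 and a == a2)
    return out
```

No tuple is replicated: each tuple goes to exactly one server per round, which is why ε = 0 here.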

The Triangle Query (2)
1-round algorithm: [Ganguly ’92, Afrati ’10, Suri ’11]
- The p servers form a cube: [p^1/3] × [p^1/3] × [p^1/3]
- Send each tuple to servers:
  – R(a, b) → (h1(a), h2(b), -)
  – S(b, c) → (-, h2(b), h3(c))
  – T(c, a) → (h1(a), -, h3(c))
  (each tuple is replicated p^1/3 times)
- Server (h1(a), h2(b), h3(c)) finds the triangle (a, b, c) locally
- Replication ε = 1/3
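A minimal single-machine sketch of the cube routing, assuming for simplicity that h1 = h2 = h3 = hash mod s and that p = s^3 for an integer s (both assumptions are mine, not from the slides):

```python
def one_round_triangles(R, S, T, s=2):
    # p = s^3 servers arranged as an s x s x s cube.
    h = lambda v: hash(v) % s
    cube = {(i, j, k): ([], [], [])
            for i in range(s) for j in range(s) for k in range(s)}
    for (a, b) in R:                     # replicate along the free z-dimension
        for k in range(s):
            cube[(h(a), h(b), k)][0].append((a, b))
    for (b, c) in S:                     # replicate along the free x-dimension
        for i in range(s):
            cube[(i, h(b), h(c))][1].append((b, c))
    for (c, a) in T:                     # replicate along the free y-dimension
        for j in range(s):
            cube[(h(a), j, h(c))][2].append((c, a))
    out = set()                          # each server joins its fragments
    for R_frag, S_frag, T_frag in cube.values():
        out |= {(a, b, c) for (a, b) in R_frag for (b2, c) in S_frag
                for (c2, a2) in T_frag if b == b2 and c == c2 and a == a2}
    return out
```

Every triangle (a, b, c) is found at exactly one server, (h(a), h(b), h(c)), so no deduplication across servers is needed beyond the union.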

Lower Bound for Triangles (1)
- Say that R, S, T are random permutations over [n]^2; expected #triangles = 1
- Each relation contains N = Θ(n log n) bits of information
- Any server knows a 1/p fraction of the input: N/p bits
- Lemma: For any deterministic algorithm and ε = 0, the p servers report in expectation O(1/p^1/2) tuples
Theorem: No (randomized) algorithm can compute triangles in one round with space exponent ε < 1/3

Lower Bound for Triangles (2)
- a_xy = Pr[server knows tuple R(x, y)]; then a_xy ≤ 1/n and Σ_{x,y} a_xy = O(n/p)
- Similarly b_yz for S(y, z) and c_zx for T(z, x)
- Friedgut’s inequality:
  Σ_{x,y,z} a_xy b_yz c_zx ≤ (Σ_{x,y} a_xy²)^1/2 (Σ_{y,z} b_yz²)^1/2 (Σ_{z,x} c_zx²)^1/2
- Hence each server knows O(1/p^3/2) triangles in expectation
- Summing over all servers: O(1/p^1/2) known output tuples
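The triangle case of Friedgut’s inequality bounds the three-way sum by the product of the Frobenius norms of the three matrices. A small numeric sanity check (the function names and the random nonnegative test matrices are my own, for illustration only):

```python
import math
import random

def triangle_sum(a, b, c):
    # Left-hand side: sum over x, y, z of a[x][y] * b[y][z] * c[z][x].
    n = len(a)
    return sum(a[x][y] * b[y][z] * c[z][x]
               for x in range(n) for y in range(n) for z in range(n))

def frobenius(m):
    # One right-hand-side factor: (sum of squared entries)^(1/2).
    return math.sqrt(sum(v * v for row in m for v in row))

def random_matrix(n, rng):
    # Nonnegative entries, like the probabilities a_xy on the slide.
    return [[rng.random() for _ in range(n)] for _ in range(n)]
```

In the proof, a_xy ≤ 1/n and Σ a_xy = O(n/p) make each squared sum O(1/p), which is where the O(1/p^3/2) per-server bound comes from.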

Outline
1. Warm-up: The Triangle Query
2. One Communication Round
3. Multiple Communication Rounds

Matching Databases
- Every relation R(A1, …, Ak) contains exactly n tuples
- Every attribute Ai contains each value in {1, …, n} exactly once
- A matching database has no skew
[Figure: example relation R(X, Y, Z) with n tuples; each column is a permutation of {1, …, n}]
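A matching relation is easy to generate: make each column an independent random permutation of {1, …, n}, so every value appears exactly once per column. A small generator (the function name and seeding are illustrative assumptions):

```python
import random

def matching_relation(n, arity, rng=None):
    # Each column is a random permutation of {1, ..., n}, so every value
    # appears exactly once per attribute: no skew by construction.
    rng = rng or random.Random(0)
    columns = []
    for _ in range(arity):
        col = list(range(1, n + 1))
        rng.shuffle(col)
        columns.append(col)
    return [tuple(col[i] for col in columns) for i in range(n)]
```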

Fractional Vertex Cover
Q(x, y, z, w, v) = R(x, y, z), S(x, w, v), T(v, z)
- Vertex cover number τ: minimum number of variables that cover every hyperedge; here τ = 2
- Fractional vertex cover number τ*: minimum total weight on variables such that each hyperedge has weight at least 1; here τ* = 3/2, with weight 1/2 on x, z, v and weight 0 on y, w
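Verifying that a weight assignment is a valid fractional cover is a one-liner: every hyperedge must receive total weight at least 1. A checker for the example query above (helper name and encoding are my own):

```python
def is_fractional_cover(weights, hyperedges, tol=1e-9):
    # A valid fractional vertex cover puts total weight >= 1 on every hyperedge.
    return all(sum(weights[v] for v in edge) >= 1 - tol for edge in hyperedges)

# Hyperedges of Q(x, y, z, w, v) = R(x, y, z), S(x, w, v), T(v, z)
EDGES = [("x", "y", "z"), ("x", "w", "v"), ("v", "z")]
COVER = {"x": 0.5, "y": 0.0, "z": 0.5, "w": 0.0, "v": 0.5}   # total weight 3/2
```

Each edge gets exactly weight 1 under COVER (e.g. T gets v + z = 1/2 + 1/2), so τ* ≤ 3/2 for this query.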

Lower Bounds
Theorem: Any randomized algorithm in the MPC model fails to compute a Conjunctive Query Q in 1 round with space exponent ε < 1 - 1/τ*(Q), on input a matching database

Upper Bounds
Theorem: The HyperCube (randomized) algorithm computes any Conjunctive Query Q in 1 round with space exponent ε ≥ 1 - 1/τ*(Q), on input a matching database (no skew), with exponentially small probability of failure (in the input size N)

The HyperCube Algorithm
Q(x1, …, xk) = S1(…), …, Sl(…)
- Compute τ* and a minimum fractional cover with weights v1, v2, …, vk
- Assign to each variable xi a share exponent e(i) = vi / τ*
- Assign the p servers to the points of a k-dimensional hypercube: [p] = [p^e(1)] × … × [p^e(k)]
- Hash each tuple to the appropriate subcube
Example: Q(x, y, z, w, v) = R(x, y, z), S(x, w, v), T(v, z)
- τ* = 3/2: vx = vv = vz = 1/2, vy = vw = 0
- e(x) = e(v) = e(z) = 1/3, e(y) = e(w) = 0
- [p] = [p^1/3] × [p^0] × [p^1/3] × [p^0] × [p^1/3]
- e.g. S(a, b, c) → (hx(a), 1, -, 1, hv(c))
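A simplified single-machine sketch of HyperCube routing: it takes integer shares per variable as given (the real algorithm derives non-integral shares p^e(i) from the fractional cover; rounding issues are ignored here), routes each tuple to all grid points that agree with its hashed bound variables, and joins locally at each server. All names below are illustrative:

```python
import itertools

def local_join(atoms, fragments):
    # Brute-force natural join of the fragments held by one server.
    results = [{}]
    for name, varlist in atoms.items():
        results = [dict(asg, **dict(zip(varlist, t)))
                   for asg in results for t in fragments[name]
                   if all(asg.get(v, val) == val for v, val in zip(varlist, t))]
    out_vars = sorted({v for vl in atoms.values() for v in vl})
    return {tuple(asg[v] for v in out_vars) for asg in results}

def hypercube_join(atoms, relations, shares):
    # atoms: relation name -> variable list; shares: variable -> bucket count.
    dims = sorted(shares)
    grid = list(itertools.product(*(range(shares[v]) for v in dims)))
    servers = {pt: {name: [] for name in atoms} for pt in grid}
    for name, varlist in atoms.items():
        for t in relations[name]:
            bound = {v: hash(val) % shares[v] for v, val in zip(varlist, t)}
            for pt in grid:          # replicate over the unbound dimensions
                if all(pt[dims.index(v)] == b for v, b in bound.items()):
                    servers[pt][name].append(t)
    out = set()
    for fragments in servers.values():   # one local join per server
        out |= local_join(atoms, fragments)
    return out
```

On the triangle query this reduces to the cube routing of the earlier slide: each relation binds two of the three dimensions and is replicated along the third.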

Examples
- Cycle query: Ck(x1, …, xk) = S1(x1, x2), …, Sk(xk, x1): τ* = k/2, so ε = 1 - 2/k
- Star query: Tk(z, x1, …, xk) = S1(z, x1), …, Sk(z, xk): τ* = 1, so ε = 0
- Line query: Lk(x0, x1, …, xk) = S1(x0, x1), …, Sk(xk-1, xk): τ* = ⌈k/2⌉, so ε = 1 - 1/⌈k/2⌉

Outline
1. Warm-up: The Triangle Query
2. One Communication Round
3. Multiple Communication Rounds

Multiple Rounds
Our results apply to a weaker model (tuple-based MPC):
- Only join tuples can be sent in rounds > 1, e.g. {R(a, b), S(b, c)}
- The routing of each tuple t depends only on t
Theorem: For every tree-like query Q, any tuple-based MPC algorithm requires at least log(diam(Q)) / log ⌈2/(1-ε)⌉ rounds
- diam(Q): largest distance between two vertices in the hypergraph
- Tree-like queries: #variables + #atoms - Σ(arities) = 1
- This bound matches the upper bound to within 1 round

Example
Line query: Lk(x0, x1, …, xk) = S1(x0, x1), …, Sk(xk-1, xk)
- Tree-like: #variables = k+1, #atoms = k, Σ(arities) = 2k, so (k+1) + k - 2k = 1
- diam(Lk) = k
- For space exponent ε = 0, we need at least log(k)/log(2/(1-0)) = log k (base 2) rounds
[Figure: path x0, x1, x2, …, xk with diameter k]
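A tiny helper (mine, not from the slides) to evaluate the round bound numerically, ignoring the integer rounding of the base 2/(1-ε):

```python
import math

def round_lower_bound(diameter, eps):
    # log(diam(Q)) / log(2 / (1 - eps)): more rounds are needed when the
    # space exponent eps is smaller, since distances shrink more slowly.
    return math.log(diameter) / math.log(2 / (1 - eps))
```

For the line query L_8 with ε = 0 this gives 3 rounds; raising ε to 1/2 (base 4) halves the exponent of the bound.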

Connected Components
As a corollary of our results for multiple rounds, we obtain lower bounds beyond conjunctive queries:
Theorem: Any tuple-based MPC algorithm that computes the connected components of an undirected graph with any space exponent ε < 1 requires Ω(log p) communication rounds

Conclusions
- Tight lower and upper bounds for one communication round in the MPC model
- The first lower bounds for multiple communication rounds
- Connected components cannot be computed in a constant number of rounds
Open problems:
- Lower and upper bounds for skewed data
- Lower bounds for > 1 rounds in the general model

Thank you!