Approximating the MST Weight in Sublinear Time


Bernard Chazelle (Princeton), Ronitt Rubinfeld (NEC), Luca Trevisan (U.C. Berkeley)

Sublinear Time Algorithms
- Make sense for problems on very large data sets.
- Go contrary to the common intuition that “an algorithm must be given at least enough time to read all the input”.
- In most non-trivial cases they are probabilistic.
- In most non-trivial cases they are approximate.

Approximation
- For decision problems: the output is the correct answer either for the given input, or at least for some other input “close” to it (property testing).
- For optimization problems: the output is a number that is close to the cost of the optimal solution for the given input. (There is not enough time to construct a solution.)

Previous Examples
- The cost of the max cut in a graph with n nodes and cn^2 edges can be approximated to within a factor ε in time 2^poly(1/(εc)) (Goldreich, Goldwasser, Ron).
- Other results for “dense” instances of optimization problems, for low-rank approximation of matrices, for metric spaces...
- No results (that we know of) for problems on bounded-degree graphs.

Our Result
Given a connected weighted graph G with maximum degree d and with weights in the range {1, ..., w}, we can compute the weight of the minimum spanning tree of G to within a multiplicative factor of 1+ε in time O(dwε^-2 log(w/ε)); we also prove that it is necessary to look at Ω(dwε^-2) entries in the representation of G. (We assume that G is represented using adjacency lists.)

Algorithm

Main Intuition
- Suppose all weights are 1 or 2.
- Then the MST weight is equal to n - 2 + (# of connected components induced by weight-1 edges).
[Figure: an example graph with weight-1 and weight-2 edges, the MST, and the connected components induced by the weight-1 edges.]
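As a sanity check (not part of the slides), the short Python sketch below compares the MST weight computed by Kruskal's algorithm against n - 2 + c1 on a small example graph with weights in {1, 2}; the helper functions and the example graph are ad hoc.

def mst_weight_kruskal(n, edges):
    # edges: list of (weight, u, v) tuples; plain Kruskal with union-find
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def num_components(n, pairs):
    # number of connected components of the graph on {0, ..., n-1} with edge set `pairs`
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    c = n
    for u, v in pairs:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            c -= 1
    return c

# Small connected example with weights in {1, 2}
n = 6
edges = [(1, 0, 1), (1, 1, 2), (2, 2, 3), (1, 3, 4), (2, 4, 5), (2, 0, 5)]
c1 = num_components(n, [(u, v) for w, u, v in edges if w == 1])
print(mst_weight_kruskal(n, edges), n - 2 + c1)  # both print 7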

Algorithm for weights in {1,2}
- To approximate the MST weight to within a multiplicative factor of 1+ε it is enough to approximate c1 to within an additive error εn (c1 := # of connected components induced by weight-1 edges).
- To approximate c1 we use ideas from Goldreich-Ron (property testing of connectivity).
- The algorithm runs in time O(dε^-2 log ε^-1).

Approximating # of connected components
- Given a graph G of max degree d with n nodes, we want to compute c, the number of connected components of G, up to an additive error εn.
- For every vertex u, define n_u := 1 / (size of the component containing u). Then c = Σ_u n_u.
- If we let α_u := max{n_u, ε}, then Σ_u α_u approximates c to within an additive εn.
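The identity c = Σ_u n_u, and the effect of truncating each term at ε, can be checked directly on a small graph. A minimal Python sketch (not from the paper; the adjacency-list format and the choice ε = 0.4 are just for illustration):

from collections import deque

def component_size(adj):
    # adj: dict vertex -> list of neighbours; returns {vertex: size of its component}
    size, seen = {}, set()
    for s in adj:
        if s in seen:
            continue
        comp, q = [s], deque([s])
        seen.add(s)
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    comp.append(v)
                    q.append(v)
        for u in comp:
            size[u] = len(comp)
    return size

# Three components: {0, 1, 2}, {3, 4}, {5}
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}
size = component_size(adj)
c = sum(1.0 / size[u] for u in adj)                      # exactly 3.0
eps = 0.4
c_trunc = sum(max(1.0 / size[u], eps) for u in adj)      # within eps * n of c
print(c, c_trunc)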

Analysis
- We can estimate Σ_u α_u using sampling.
- Once we pick a vertex u at random, the value α_u can be computed in time O(d/ε).
- We need to pick O(1/ε^2) vertices, so we get running time O(d/ε^3).

Algorithm CC-APPROX(ε)
  Repeat O(1/ε^2) times:
    pick a random vertex v
    do a BFS from v, stopping after 2/ε steps
    b := 1 / (number of visited vertices)
  return (average of the values b) * n
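A runnable Python sketch of CC-APPROX follows; the slide only fixes the number of repetitions and the BFS budget up to O(·), so the constants 4/ε^2 and 2/ε used here are illustrative. It assumes the same vertex -> neighbour-list dictionary format as the sketch above.

import random
from collections import deque

def cc_approx(adj, eps, rng=random):
    # Estimate the number of connected components of adj (dict: vertex -> neighbours)
    # up to roughly an additive eps * n, by sampling vertices and running a BFS
    # truncated after about 2/eps visited vertices.
    n = len(adj)
    vertices = list(adj)
    reps = max(1, int(4 / eps ** 2))     # O(1/eps^2) repetitions
    cap = max(1, int(2 / eps))           # BFS budget: ~2/eps vertices
    total = 0.0
    for _ in range(reps):
        v = rng.choice(vertices)
        seen, q = {v}, deque([v])
        while q and len(seen) < cap:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        total += 1.0 / len(seen)         # b := 1 / (number of visited vertices)
    return total / reps * n              # (average of the values b) * n

# Example: the 6-vertex graph above has 3 components
# print(cc_approx({0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3], 5: []}, eps=0.2))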

Improved Algorithm
- Pick vertices at random as before, but stop the BFS after 2^k steps with probability 2^-k.
- If the output is appropriately “scaled”, the average output is right.
- The BFS takes on average O(log 1/ε) steps instead of O(1/ε).
- The variance is still low.
- The improved algorithm runs in time O(dε^-2 log 1/ε).

General Weights
- Generalize the argument for weights 1 and 2.
- Let c_i = # of connected components induced by the edges of weight at most i.
- Then the MST weight is n - w + Σ_{i=1,...,w-1} c_i.
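One way to see this identity (a short derivation, not spelled out on the slides): every MST edge has weight > 0, and for each i ≥ 1 the MST contains exactly c_i - 1 edges of weight greater than i, so

MST(G) = Σ_{e in MST} w(e)
       = Σ_{i=0,...,w-1} #{ e in MST : w(e) > i }
       = (n - 1) + Σ_{i=1,...,w-1} (c_i - 1)
       = n - w + Σ_{i=1,...,w-1} c_i.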

Final Algorithm
- For i = 1, ..., w-1, call the algorithm to approximate the # of connected components on the subgraph of G obtained by removing the edges of cost > i.
- Get a_i, an approximation of c_i.
- Return n - w + Σ_{i=1,...,w-1} a_i.
- The answer is, on average, within εn/2 of the cost of the MST, and the variance is bounded.
- Total running time: O(dwε^-2 log(w/ε)).
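Putting the pieces together, a minimal Python sketch of the final algorithm (it reuses the cc_approx function from the sketch above; the adjacency format and the per-call accuracy parameter are illustrative, not the paper's exact choices). A genuinely sublinear implementation never materializes the threshold subgraphs; the truncated BFS simply skips edges of weight > i. They are built explicitly here only to keep the example short.

def approx_mst_weight(adj_w, w, eps):
    # adj_w: dict vertex -> list of (neighbour, weight) pairs, weights in {1, ..., w}
    # Returns an estimate of the MST weight: n - w + sum of the estimated c_i.
    n = len(adj_w)
    estimate = float(n - w)
    for i in range(1, w):
        # subgraph keeping only the edges of weight <= i (built explicitly for clarity)
        adj_i = {u: [v for v, wt in nbrs if wt <= i] for u, nbrs in adj_w.items()}
        estimate += cc_approx(adj_i, eps)
    return estimate

# Example: the 6-vertex graph from the first sketch, with w = 2
adj_w = {0: [(1, 1), (5, 2)], 1: [(0, 1), (2, 1)], 2: [(1, 1), (3, 2)],
         3: [(2, 2), (4, 1)], 4: [(3, 1), (5, 2)], 5: [(4, 2), (0, 2)]}
# print(approx_mst_weight(adj_w, w=2, eps=0.2))  # close to the exact MST weight of 7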

Extensions
- Low average degree
- Non-integer weights

Lower Bound

Abstract sampling problem
- Define two binary distributions A, B such that
  Pr[A=1] = 1/w, Pr[A=0] = 1 - 1/w
  Pr[B=1] = 1/w + ε/w, Pr[B=0] = 1 - 1/w - ε/w
- Distinguishing A from B with constant probability requires Ω(w/ε^2) samples.

Reduction
- We consider two distributions of weights over a cycle of length n.
- In distribution G, for each edge we sample from A; if the sample is 0 the edge gets weight 1, otherwise it gets weight w.
- In distribution H, we do the same with B.
- G and H are likely to have MST costs that differ by about εn.
- To distinguish them we need to look at Ω(w/ε^2) edge weights.
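A back-of-the-envelope calculation (not on the slides) behind “differ by about εn”: on a cycle the MST is the cycle minus one heaviest edge, so with k edges of weight w its weight is (n - k) + (k - 1)w. Taking expectations with E[k] ≈ n/w under G and E[k] ≈ (1+ε)n/w under H gives

E_G[MST] ≈ n - n/w + (n/w - 1)·w = 2n - n/w - w
E_H[MST] ≈ n - (1+ε)n/w + ((1+ε)n/w - 1)·w = 2n + εn - (1+ε)n/w - w

so the two costs differ by about εn(1 - 1/w) ≈ εn.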

Higher Degree
- Sample from G or H as before; also add d-1 forward edges of weight w+1 from each vertex, and randomly permute the names of the vertices.
- Now, on average, reading t edge weights gives us t/d samples from A or B, so t = Ω(dw/ε^2).

Conclusions
- A plausibility result: approximation for a standard graph problem on bounded-degree (and sparse) graphs can be achieved in time independent of the number of vertices.
- Use of approximate cost without a solution?
- More problems? Max SAT (work in progress). Something really useful?