Randomized Smoothing Networks

Slides:



Advertisements
Similar presentations
Randomness Conductors Expander Graphs Randomness Extractors Condensers Universal Hash Functions
Advertisements

Why Simple Hash Functions Work : Exploiting the Entropy in a Data Stream Michael Mitzenmacher Salil Vadhan And improvements with Kai-Min Chung.
Scalable and Dynamic Quorum Systems Moni Naor & Udi Wieder The Weizmann Institute of Science.
PERMUTATION CIRCUITS Presented by Wooyoung Kim, 1/28/2009 CSc 8530 Parallel Algorithms, Spring 2009 Dr. Sushil K. Prasad.
Approximation Algorithms Chapter 14: Rounding Applied to Set Cover.
Theory of Computing Lecture 3 MAS 714 Hartmut Klauck.
Algorithms Recurrences. Definition – a recurrence is an equation or inequality that describes a function in terms of its value on smaller inputs Example.
Spring 2015 Lecture 5: QuickSort & Selection
Advanced Topics in Algorithms and Data Structures Page 1 Parallel merging through partitioning The partitioning strategy consists of: Breaking up the given.
Tirgul 10 Rehearsal about Universal Hashing Solving two problems from theoretical exercises: –T2 q. 1 –T3 q. 2.
DISTRIBUTION OF THE SAMPLE MEAN
Dynamic Hypercube Topology Stefan Schmid URAW 2005 Upper Rhine Algorithms Workshop University of Tübingen, Germany.
Parallel Merging Advanced Algorithms & Data Structures Lecture Theme 15 Prof. Dr. Th. Ottmann Summer Semester 2006.
Tirgul 8 Universal Hashing Remarks on Programming Exercise 1 Solution to question 2 in theoretical homework 2.
Tolerating Faults in Counting Networks Marc D. Riedel Jehoshua Bruck California Institute of Technology Parallel and Distributed.
Derandomizing LOGSPACE Based on a paper by Russell Impagliazo, Noam Nissan and Avi Wigderson Presented by Amir Rosenfeld.
CS294-6 Reconfigurable Computing Day 14 October 7/8, 1998 Computing with Lookup Tables.
Sorting Networks Uri Zwick Tel Aviv University May 2015.
Finding a maximum independent set in a sparse random graph Uriel Feige and Eran Ofek.
LINEAR REGRESSION Introduction Section 0 Lecture 1 Slide 1 Lecture 5 Slide 1 INTRODUCTION TO Modern Physics PHYX 2710 Fall 2004 Intermediate 3870 Fall.
COMPLEXITY SCIENCE WORKSHOP 18, 19 June 2015 Systems & Control Research Centre School of Mathematics, Computer Science and Engineering CITY UNIVERSITY.
1 Plaxton Routing. 2 Introduction Plaxton routing is a scalable mechanism for accessing nearby copies of objects. Plaxton mesh is a data structure that.
Logic Circuits Chapter 2. Overview  Many important functions computed with straight-line programs No loops nor branches Conveniently described with circuits.
The Selection Problem. 2 Median and Order Statistics In this section, we will study algorithms for finding the i th smallest element in a set of n elements.
SE: CHAPTER 7 Writing The Program
Comparison Networks Sorting Sorting binary values
Probability Theory Overview and Analysis of Randomized Algorithms Prepared by John Reif, Ph.D. Analysis of Algorithms.
An Efficient Decentralized Algorithm for the Distributed Trigger Counting (DTC) Problem Venkatesan T. Chakravarthy (IBM Research-India) Anamitra Roy Choudhury.
Conformance Test Experiments for Distributed Real-Time Systems Rachel Cardell-Oliver Complex Systems Group Department of Computer Science & Software Engineering.
Self Stabilizing Smoothing and Counting Maurice Herlihy, Brown University Srikanta Tirthapura, Iowa State University.
Lecture 4 Sorting Networks. Comparator comparator.
Data Structures and Algorithms in Parallel Computing Lecture 2.
Local Distributed Agent Matchmaking Elth Ogston and Stamatis Vassiliadis Computer Engineering Laboratory TU Delft.
1 Introduction to Quantum Information Processing CS 467 / CS 667 Phys 467 / Phys 767 C&O 481 / C&O 681 Richard Cleve DC 3524 Course.
Counting and Distributed Coordination BASED ON CHAPTER 12 IN «THE ART OF MULTIPROCESSOR PROGRAMMING» LECTURE BY: SAMUEL AMAR.
1 Sorting Networks Sorting.
Maximum likelihood estimators Example: Random data X i drawn from a Poisson distribution with unknown  We want to determine  For any assumed value of.
NOTE: To change the image on this slide, select the picture and delete it. Then click the Pictures icon in the placeholder to insert your own image. Fast.
1 Concurrent Counting is harder than Queuing Costas Busch Rensselaer Polytechnic Intitute Srikanta Tirthapura Iowa State University.
ICDCS 05Adaptive Counting Networks Srikanta Tirthapura Elec. And Computer Engg. Iowa State University.
Draft-deoliveira-diff-te-preemption-02.txt J. C. de Oliveira, JP Vasseur, L. Chen, C. Scoglio Updates: –Co-author: JP Vasseur –New preemption criterion.
Theory of Computational Complexity M1 Takao Inoshita Iwama & Ito Lab Graduate School of Informatics, Kyoto University.
Andrey Markov, public domain image
Computers’ Basic Organization
Algorithms for Big Data: Streaming and Sublinear Time Algorithms
Lecture 4 Sorting Networks
Oliver Schulte Machine Learning 726
Algorithm Design and Analysis (ADA)
Game Theory Just last week:
Probability Theory Overview and Analysis of Randomized Algorithms
Auburn University COMP8330/7330/7336 Advanced Parallel and Distributed Computing Interconnection Networks (Part 2) Dr.
Introduction to Algorithms
Randomized Algorithms
SKIP GRAPHS James Aspnes Gauri Shah SODA 2003.
Analysis of Link Reversal Routing Algorithms
Some Rules for Expectation
Randomized Algorithms CS648
Asymptotic Notations Algorithms Lecture 9.
Linear sketching with parities
The Curve Merger (Dvir & Widgerson, 2008)
Delta Network The delta network is one example of a multistage interconnection network that can be used as a switch fabric The delta network is an example.
Range-Efficient Computation of F0 over Massive Data Streams
Lecture 3 Sorting and Selection
Adaptive Cooperative Systems Chapter 6 Markov Random Fields
Pools, Counters, and Parallelism
The Zig-Zag Product and Expansion Close to the Degree
Compact routing schemes with improved stretch
Shared Counters and Parallelism
The Selection Problem.
Asynchronous token routing device
Presentation transcript:

Randomized Smoothing Networks Maurice Herlihy, Brown University Srikanta Tirthapura, Iowa State University

Distributed Load Balancing Network Example: Routing Tasks to Processors on a Grid Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 In a k-smoothing Network, the numbers of Tokens on different output wires differ by at most k Randomized Smoothing Networks IPDPS 2004

How to Build Smoothing Data Structures? Centralized Solution Random routing Distributed and local The smoothness can diverge without bound as more tokens enter the network Balancing Networks Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Overview of Talk Describe Smoothing Networks built out of balancers Our Result: Randomized Smoothing Networks better than Deterministic Ones Analysis of the Randomized Butterfly (or Block) network Randomized Smoothing Networks IPDPS 2004

Basic Component: Balancer Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Balancer Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Balancer Randomized Smoothing Networks IPDPS 2004

Smoothing Networks: Scalable Distributed Data Structures Bitonic[2] Bitonic[4] Randomized Smoothing Networks IPDPS 2004

Prior Work: Deterministic Smoothing Networks The Network Balancers are Initialized in a very specific way (Usually) All balancers are pointing up depth = 3, width =4 Randomized Smoothing Networks IPDPS 2004

Our Work: Randomized Smoothing Networks Random balancers –initialized as follows With prob = ½, up With prob = ½, down Resulting Initial Network State also random How good are the smoothing properties of randomized networks? randomly initialized balancers Randomized Smoothing Networks IPDPS 2004

Benefits of Randomized Networks Easy Initialization Each balancer sets itself to a random state Completely Local Actions Easy to recover from faults Global reconfiguration unnecessary Better Smoothing Properties Randomized Smoothing Networks IPDPS 2004

Butterfly Network: Inductive Construction Width = 2 Width = 8 Width = 4 Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Our Main Result Many Randomized Networks produce much smoother outputs than Regular Deterministically Initialized Networks For a Butterfly network of width w: The output of a randomized network has smoothness with high probability For any deterministic initialization, worst case output smoothness is no better than O(log w) Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Implications nearly constant for practical purposes If w =10^9, this is 14 Smoothness independent of the number of tokens entering the network Network is very simple and easy to construct No further constants hidden in O(…) Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Previous Work Aspnes Herlihy and Shavit, 1991 Bitonic and Periodic Counting networks (which are also 1-smoothing) Isomorphic to sorting networks Width w, depth Klugerman and Plaxton, 1992 Better depth constructions of counting networks Constants are very high, not practical Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Previous Work (2) Aiello, Venkatesan and Yung, 1993 Counting and Smoothing Networks using random and deterministic balancers An alternate definition of a randomized balancer Our results apply to their definition of random balancers also In contrast, we use only randomized balancers Herlihy and Tirthapura, 2003 Worst case smoothness of deterministic Butterfly, Periodic and Bitonic networks Matching upper and lower bounds Self-stabilizing constructions Randomized Smoothing Networks IPDPS 2004

Analysis: Random balancer a, b = numbers of tokens entering the balancer c, d = numbers of tokens exiting the balancer a c r = + ½ with probability ½ = - ½ with probability ½ c = (a+b)/2 + D(a+b) r d = (a+b)/2 – D(a+b) r b d D = odd characteristic function Randomized Smoothing Networks IPDPS 2004

Analysis (2): Butterfly[8] y1 = f(x1,x2,x3,..x8, r1,r2,………,r7) E[y1] = (x1+x2+…+x8)/8 y1 depends only on the balancer decisions r1…r7 and on x1..x8 Width = 8 x1 x2 x3 x8 y1 y2 y3 y8 x4 x5 x6 x7 y4 y5 y6 y7 r1 r5 r7 r2 r3 r6 r4 Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Analysis (3) Random variable y1 is hard to work with directly Define Alternate Random Variable z1: z1 = (r1+r2+r3+r4)/4 + (r5+r6)/2 + r7 + (x1+…+x8)/8 x1 x2 x3 x8 y1 y2 y3 y8 x4 x5 x6 x7 y4 y5 y6 y7 r1 r2 r3 r4 r6 r5 r7 Width = 8 Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Analysis (4) Random Variable z1 is easier to handle, since it is a linear function of r1, r2, … r7 Use Hoeffding’s inequality to bound the tail of z1, and hence y1 Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Theorem The output of Butterfly[w] with random balancers is smooth with probability at least (1-4/w) , irrespective of the input sequence Contrast: For any deterministic initialization, there exist inputs which lead to (log w) smooth outputs Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Conclusions Simple network (butterfly) with small depth and very good smoothness Easy, local reconfiguration after faults Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004 Open Questions Do there exist simple, log-depth networks with constant output smoothness (with high probability)? Can the upper bound on the butterfly network be improved? Matching Lower bounds? Smoothing networks for tokens of unequal weights? Randomized Smoothing Networks IPDPS 2004

Randomized Smoothing Networks IPDPS 2004