Breaking the O(n²) Bit Barrier: Scalable Byzantine Agreement with an Adaptive Adversary
Valerie King (Univ. of Victoria, Canada) and Jared Saia (Univ. of New Mexico, USA)



Byzantine Agreement
Each processor starts with a bit. Goal: all processors decide on the same bit, and that bit must match at least one of their initial bits. t = number of bad processors controlled by a malicious adversary.

Byzantine agreement for large-scale networks
If you could do it practically, you would! Why?
– Protecting against malicious attacks
– Organizing large communities of users
– Mediation in game theory
– A fundamental building block

Our Model
– Processors = {1, 2, …, n}
– Message passing: A knows when it receives a message from B
– Synchronous, with a rushing adversary
– Private random bits
– Private channels
– Adaptive adversary
– Resilience: t < (1/3 − ε)n
– Limit on the number of bits sent by good processors; bad processors can send any number of bits.

Goal: towards practical, scalable BA
– Polylog bits sent per processor
– Polylog rounds

Impossibility
Any (randomized) BA protocol which always uses o(n²) messages in this model has Pr(failure) > 0 (an implication of Dolev–Reischuk).

Our results
Theorem 1 (BA): For any constants c, ε, there is a constant d and a (1/3 − ε)n-resilient protocol which solves BA with probability 1 − 1/n^c, using Õ(√n) bits per processor in O(log^d n) rounds.

Also
Theorem 2 (a.e. BA): For any constants c, ε, there is a constant d and a (1/3 − ε)-resilient protocol which brings a 1 − O(1/log n) fraction of the good processors to agreement with probability 1 − 1/n^c, using Õ(1) bits per processor in O(log^d n) rounds.

Previous work
– An expected constant number of rounds suffices (Feldman and Micali, 1988).
– All previously known protocols use all-to-all communication.

KEY IDEA: The power of a short, somewhat random stream S
Let S = s_1 s_2 … s_k be a short stream of numbers.
– Some are a.e. global random numbers; some are fixed by an adversary which can see the preceding stream when choosing.
– S can be generated w.h.p.

Talk outline
I: Using S to get a.e. BA
II: Using S to go from a.e. BA to BA
III: Generating S

Rabin's BA with a Global Coin GC (t < n/3)
Set vote <- input bit.
REPEAT c log n rounds:
  Send vote to all processors.
  Maj <- majority bit among the received votes
  Fract <- fraction of votes for Maj
  If Fract > 2/3, then vote <- Maj (agree on this bit)
  Else if GC = 1, set vote <- 1; else set vote <- 0
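A minimal single-machine simulation sketch of the Rabin-style loop above, not the paper's scalable protocol: all good processors are assumed to see the same multiset of votes, the global coin is one shared PRNG call per round, and the Byzantine processors simply push the minority bit. The names (run_rabin_ba, bad_votes, etc.) are illustrative.

```python
import math
import random

def run_rabin_ba(inputs, t, c=3, seed=0):
    """inputs: initial bits of the n - t good processors; t: number of bad ones."""
    rng = random.Random(seed)
    n = len(inputs) + t
    votes = list(inputs)                      # current votes of the good processors
    for _ in range(c * math.ceil(math.log2(n))):
        good_majority = int(sum(votes) * 2 > len(votes))
        bad_votes = [1 - good_majority] * t   # bad procs push the minority bit
        all_votes = votes + bad_votes
        maj = int(sum(all_votes) * 2 > len(all_votes))
        fract = all_votes.count(maj) / len(all_votes)
        global_coin = rng.randint(0, 1)       # one shared coin flip per round
        if fract > 2 / 3:
            votes = [maj] * len(votes)        # every good proc adopts Maj
        else:
            votes = [global_coin] * len(votes)
    return votes

if __name__ == "__main__":
    print("good processors decide:", run_rabin_ba(inputs=[0, 1, 1, 0, 1, 1, 0], t=2))
```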

Scalable a.e. BA with an a.e. Global Coin GC (t < (1/3 − ε)n)
– Use an averaging sampler to assign neighbors to processors: a deterministic way to get mostly good samples.
– Almost all neighbor sets contain a representative fraction of good processors.
– Almost all good processors compute the correct Maj when Fract > 2/3 + ε/2.
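A sketch of the "sample your neighbors" idea: each processor estimates the majority from a small neighbor set instead of hearing from everyone. A real averaging sampler is deterministic; uniform random sampling below is only a stand-in, and estimate_majority and the polylog sample size are illustrative choices.

```python
import math
import random

def estimate_majority(all_votes, sample_size, eps=0.1, rng=random):
    """Return (maj, accepted) as one processor would compute from its neighbor sample."""
    neighbors = rng.sample(range(len(all_votes)), sample_size)
    sampled = [all_votes[i] for i in neighbors]
    maj = int(sum(sampled) * 2 > len(sampled))
    fract = sampled.count(maj) / len(sampled)
    return maj, fract > 2 / 3 + eps / 2        # accept Maj only on a clear supermajority

if __name__ == "__main__":
    n = 10_000
    votes = [1] * (3 * n // 4) + [0] * (n // 4)      # 3/4 of the processors vote 1
    k = int(math.log2(n)) ** 2                       # polylog-size neighbor set
    print(estimate_majority(votes, sample_size=k))   # almost always (1, True)
```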

Using S instead of GC --> a.e. BA w.h.p.
For i = 1, …, k: generate bit s_i and run a.e. BA using s_i as the a.e. global coin.
It suffices that c log n bits of S are known a.e. and are random.
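A small sketch of the substitution: the per-round vote-update rule stays the same, but the coin is consumed from the stream S rather than freshly flipped; some entries of S may have been fixed by the adversary, as long as enough of them are random. The helper names and the (Maj, Fract) inputs are illustrative.

```python
def vote_update(maj, fract, coin):
    """One round's rule: adopt Maj on a > 2/3 fraction, otherwise follow the coin."""
    return maj if fract > 2 / 3 else coin

def run_with_stream(my_vote, per_round_views, S):
    # per_round_views: the (Maj, Fract) pair this processor computed in each round
    for (maj, fract), s_i in zip(per_round_views, S):
        my_vote = vote_update(maj, fract, s_i)
    return my_vote

if __name__ == "__main__":
    S = [1, 0, 1, 1, 0]                         # some bits random, some adversarial
    views = [(1, 0.55), (1, 0.60), (1, 0.80)]   # one processor's view per round
    print(run_with_stream(0, views, S))
```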

II: Using S to go from a.e. BA to BA
Idea: query a random set of processors for their bit. Since almost all good processors agree, the majority should give the correct answer.
– This works if the bad processors have a communication bound.
– But in our model, the adversary can flood all processors with queries!
– So use S to decide which queries to answer.

II: Using S to go from a.e. BA to BA
Labels = {1, …, √n}. FOR each number s of S ∈ Labels^k:
– Each processor p picks Õ(√n) random queries and sends a label with each query.
– A queried processor q answers only if the label = s (and q is not overloaded).
– If a 2/3 majority of p's queries with the same label are returned and agree on v, then p decides v.
IT SUFFICES TO HAVE AN a.e. AGREED-upon S with a RANDOM subsequence!
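A sketch of one label-gated query round (query_round and its parameters are illustrative, not the paper's code): the querier tags each query with a random label, and a responder only answers queries whose label equals the current stream value s, so flooding with wrong labels costs nothing to ignore. Overload handling and the exact Õ(·) constants are simplified away.

```python
import math
import random

def query_round(bits, s, num_labels, queries_per_proc, rng):
    """bits[q] is processor q's current bit (None models a silent processor)."""
    n = len(bits)
    answered = []
    for _ in range(queries_per_proc):
        q = rng.randrange(n)
        label = rng.randrange(num_labels)        # p attaches a random label
        if label == s and bits[q] is not None:   # q answers only if label == s
            answered.append(bits[q])
    if not answered:
        return None
    v = int(sum(answered) * 2 > len(answered))
    # decide v only if at least a 2/3 majority of the answers agree on it
    return v if answered.count(v) * 3 >= 2 * len(answered) else None

if __name__ == "__main__":
    rng = random.Random(1)
    n = 4096
    bits = [1] * n                               # almost all good procs already agree on 1
    labels = math.isqrt(n)                       # |Labels| = sqrt(n)
    per_proc = labels * int(math.log2(n)) ** 2   # roughly Õ(sqrt(n)) queries
    print(query_round(bits, s=rng.randrange(labels), num_labels=labels,
                      queries_per_proc=per_proc, rng=rng))
```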

III: Generating S

Sparse network
A tree of robust supernodes of increasing size, with links:
– processors in a child node --> processors in its parent node
– processors in a parent node --> leaves of its subtrees
The assignment of all processors to supernodes, and the links, are generated using averaging samplers.
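A rough sketch of the tree's shape only (build_supernode_tree, the branching factor, and the leaf size are hypothetical choices, and uniform random sampling stands in for the averaging sampler): supernodes near the root contain more processors than those near the leaves.

```python
import math
import random

def build_supernode_tree(n, branching=8, leaf_size=None, rng=random):
    """Return tree[d] = list of supernodes (processor-id lists) at depth d (0 = root)."""
    leaf_size = leaf_size or int(math.log2(n)) ** 2           # polylog-size leaves
    depth = max(1, int(math.log(n / leaf_size, branching)))
    tree = []
    for d in range(depth + 1):
        num_nodes = branching ** d
        size = min(n, leaf_size * branching ** (depth - d))   # bigger nodes near the root
        tree.append([rng.sample(range(n), size) for _ in range(num_nodes)])
    return tree

if __name__ == "__main__":
    tree = build_supernode_tree(n=4096)
    print([(len(level), len(level[0])) for level in tree])    # (#nodes, node size) per depth
```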

Arrays of random numbers
Each processor p_i generates an array A_i of random numbers and secret-shares it with its leaf node. Numbers in the arrays are revealed as needed, to elect which remaining parts of the arrays will be passed on to the parent node.
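A minimal sketch of secret-sharing an array with the members of a leaf node, using plain XOR (additive) sharing for illustration; the actual protocol needs a scheme with stronger robustness properties, so treat this only as a picture of "split each value so that no small coalition learns it". The helpers share and reconstruct are illustrative.

```python
import secrets

def share(value, num_shares, bits=32):
    """Split an integer into num_shares XOR-shares; all are needed to reconstruct."""
    shares = [secrets.randbits(bits) for _ in range(num_shares - 1)]
    last = value
    for s in shares:
        last ^= s
    return shares + [last]

def reconstruct(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

if __name__ == "__main__":
    array = [secrets.randbits(32) for _ in range(4)]      # processor p_i's array A_i
    shared = [share(x, num_shares=7) for x in array]      # one share per leaf-node member
    assert [reconstruct(s) for s in shared] == array
    print("reconstructed OK")
```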

Feige's algorithm, carried out in each node
Each candidate picks a bin; the winners are the contents of the lightest bin.
Requires agreement on all bin choices.
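A sketch of Feige's lightest-bin election (illustrative parameters, not the paper's): each candidate picks one of num_bins bins, and the candidates in the least-occupied bin win. Bad candidates may choose bins adversarially, but they cannot make an honest candidate's bin heavy without joining it themselves.

```python
import random
from collections import defaultdict

def lightest_bin_election(candidates, num_bins, adversarial_choices=None, rng=random):
    """adversarial_choices may pre-specify bin picks for (bad) candidates."""
    choices = dict(adversarial_choices or {})
    bins = defaultdict(list)
    for c in candidates:
        b = choices.get(c, rng.randrange(num_bins))   # honest picks are uniform
        bins[b].append(c)
    lightest = min(range(num_bins), key=lambda b: len(bins[b]))
    return bins[lightest]

if __name__ == "__main__":
    candidates = list(range(64))
    crowding = {c: 0 for c in range(10)}   # bad candidates 0..9 all crowd into bin 0
    print(lightest_bin_election(candidates, num_bins=8, adversarial_choices=crowding))
```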

Elections of arrays in a node
We use scalable a.e. BA; the bin numbers and S are given by numbers from the sequence of winning arrays of the children.

As an array moves up, its secret shares are split among more processors on higher levels and erased from the children, so that the adversary cannot learn a large fraction of the arrays promoted to a higher level by taking over a small set of processors on a lower level.

Secrets are revealed as needed: by reversing and duplicating the communication down every path and reassembling the shares at every leaf of the subtree, so that the adversary cannot prevent a secret from being exposed by blocking a single path.

Leaves are sampled (deterministically) by the processors in the subtree root to learn the secret value.

Generation of a short S
Only a polylog number of arrays are left at each of the polylog children of the root; these form S.
When agreement on all of S is needed, a.e. BA can be run using supplemental bits.

Conclusions: uses of S
S is easier to generate than a single random coin flip:
– S can also be generated w.h.p., scalably, in the full-information nonadaptive-adversary model (whereas a single random coin flip can't).
– A polylog-size S has sufficient randomness to specify a set of n small quorums which are all good w.h.p. (submitted to ICDCN).
– S is useful in the asynchronous algorithm with a nonadaptive adversary (SODA 2008).

Future work
– Asynchronous?
– Towards more practical, scalable BA? Bounding the communication of the bad processors makes going from a.e. BA to BA easy; this would likely also simplify the a.e. BA protocol.
– Other problems (SMPC, handling churn and larger name spaces).
– Other user models (selfish).

Questions?