Achieving Byzantine Agreement and Broadcast against Rational Adversaries Adam Groce Aishwarya Thiruvengadam Ateeq Sharfuddin CMSC 858F: Algorithmic Game Theory University of Maryland, College Park

Overview  Byzantine agreement and broadcast are central primitives in distributed computing and cryptography.  The original paper proves (LPS1982) that successful protocols can only be achieved if the number of adversaries is 1/3 rd the number of players.

Our Work  We take a game-theoretic approach to this problem  We analyze rational adversaries with preferences on outputs of the honest players.  We define utilities rational adversaries might have.  We then show that with these utilities, Byzantine agreement is possible with less than half the players being adversaries  We also show that broadcast is possible for any number of adversaries with these utilities.

The Byzantine Generals Problem  Introduced in 1980/1982 by Leslie Lamport, Robert Shostak, and Marshall Pease.  Originally used to describe distributed computation among fallible processors in an abstract manner.  Has been applied to fields requiring fault tolerance or with adversaries.

General Idea  The n generals of the Byzantine empire have encircled an enemy city.  The generals are far away from each other necessitating messengers to be used for communication.  The generals must agree upon a common plan (to attack or to retreat).

General Idea (cont.)  Up to t generals may be traitors.  If all good generals agree upon the same plan, the plan will succeed.  The traitors may mislead the generals into disagreement.  The generals do not know who the traitors are.

General Idea (cont.)  A protocol is a Byzantine Agreement (BA) protocol tolerating t traitors if the following conditions hold for any adversary controlling at most t traitors: A.All loyal generals act upon the same plan of action. B.If all loyal generals favor the same plan, that plan is adopted.

General Idea (broadcast)  Assume general i acts as the commanding general and sends his order to the remaining n-1 lieutenant generals.  A protocol is a broadcast protocol tolerating t traitors if these two conditions hold: 1.All loyal lieutenants obey the same order. 2.If the commanding general is loyal, then every loyal lieutenant general obeys the order he sends.

Impossibility  Shown impossible for t ≥ n/3  Consider n = 3, t = 1; Sender is malicious; all initialized with 1; from A’s perspective: Sender 1 B1B1 A1A A does not know if Sender or B is honest. I’m Spartacus!

Equivalence of BA and Broadcast  Given a protocol for broadcast, we can construct a protocol for Byzantine agreement. 1.All players use the broadcast protocol to send their input to every other player. 2.All players output the value they receive from a majority of the other players.

Equivalence of BA and Broadcast  Given a protocol for Byzantine agreement, we can construct a protocol for broadcast. 1.The sender sends his message to all other players. 2.All players use the message they received in step 1 as input in a Byzantine agreement protocol. 3.Players output whatever output is given by the Byzantine agreement protocol.

Previous Works  It was shown in PSL1980 that algorithms can be devised to guarantee broadcast/Byzantine agreement if and only if n ≥ 3t+1.  If traitors cannot falsely report messages (for example, if a digital signature scheme exists), it can be achieved for n ≥ t ≥ 0.  PSL1980 and LSP1982 demonstrated an exponential communication algorithm for reaching BA in t+1 rounds for t < n/3.

Previous Works  Dolev, et al. presented a 2t+3 round BA with polynomially bounded communication for any t < n/3.  Probabilistic BA protocols tolerating t < n/3 have been shown running in expected time O (t / log n); though running time is high in worst case.  Faster algorithms tolerating (n - 1)/3 faults have been shown if both cryptography and trusted parties are used to initialize the network.

Rational adversaries  All results stated so far assume general, malicious adversaries.  Several cryptographic problems have been studied with rational adversaries Have known preferences on protocol output Will only break the protocol if it benefits them  In MPC and secret sharing, rational adversaries allow stronger results  We apply rational adversaries to Byzantine agreement and broadcast

Definition: Security against rational adversaries  A protocol for BA or broadcast is secure against rational adversaries with a particular utility function if, for any adversary A1 against which the protocol does not satisfy the security conditions, there is another adversary A2 such that: When the protocol is run with A2 as the adversary, all security conditions are satisfied. The utility achieved by A2 is greater than that achieved by A1.

Definition: Security against rational adversaries (cont.)  This definition requires that, for the adversary, following the protocol strictly dominates all strategies resulting in security violations.  Guarantees that there is an incentive not to break the security.  We do not require honest execution to strictly dominate other strategies.

Utility Definitions  Meaning of “secure against rational adversaries” is dependent on preferences that adversaries have. Some utility functions make adversary- controlled players similar to honest players, with incentives to break protocol only in specific circumstances. Other utility functions define adversaries as strictly malicious.  We present several utility definitions that are natural and reasonable.

Utility Definitions (cont.)  We limit to protocols that attempt to broadcast or agree in a single bit.  The output can be one of the following: All honest players output 1. All honest players output 0. Honest players disagree on output.

Utility Definitions (cont.)  We assume that the adversary knows the inputs of the honest players.  Therefore, the adversary can choose a strategy to maximize utility for that particular input set.

Utility Definitions (cont.)  Both protocol and adversary can act probabilistically.  So, an adversary’s choice of strategy results not in a single outcome but in a probability distribution over possible outcomes.  We establish a preference ordering on probability distributions.

Strict Preferences  An adversary with “strict preferences” is one that will maximize the likelihood of its first-choice outcome. For a particular strategy, let a 1 be the probability of the adversary achieving its first- choice outcome and a 2 be the probability of achieving the second choice outcome. Let b 1 and b 2 be the same probabilities for a second potential strategy. We say that the first strategy is preferred if and only if a 1 > b 1 or a 1 =b 1 and a 2 > b 2.

Strict Preferences (cont.)  Not a “utility” function in the classic sense  Provides a good model for a very single-minded adversary.

Strict Preferences (cont.)  We will use shorthand to refer to the ordering of outcomes: For example: 0s > disagreement > 1s. Denotes an adversary who prefers that all honest players output 0, whose second choice is disagreement, and last choice is all honest players output 1.

Linear Utility  An adversary with “linear utilities” has its utilities defined by: Utility =u 1 Pr[players output 0] + u 2 Pr[players output 1] + u 3 Pr [players disagree] Where u 1 + u 2 + u 3 = 1.

Definition (0-preferring)  A 0-preferring adversary is one for which Utility = E[number of honest players outputting 0]. This is not a refinement of the strict-preferences adversary with ordering 0s > disagreement > 1s.

Other possible utility definitions  The utility definitions do not cover all preference orderings.  With n players, there is 2 n output combinations.  There are an infinite number of probability distributions on those outcomes.  Any well-ordering on these probability distributions is a valid set of preferences against which security could be guaranteed.

Other possible utility definitions  Our utility definitions preserve symmetry of players, but this is not necessary.  It is also possible that the adversary’s output preferences are a function of the input.  The adversary could be risk-averse  Adversarial players might not all be centrally controlled. Could have conflicting preferences

Equivalence of Broadcast to BA, revisited  The reductions used against malicious adversaries don't always apply: building BA from broadcast fails, but building broadcast from BA succeeds.

Strict preferences case  Assume a preference ordering of 0 > 1 > disagree.  t < n/2  Protocol: Each player sends his input to everyone. Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0.

Strict preferences case (cont.)  Proof: If all honest players held the same input, the protocol terminates with the honest players agreeing despite what the adversary says. If the honest players do not form a majority, it is in adversarial interest to send 0’s.

Generalizing the proof  Same protocol: Each player sends his input to everyone. Each player outputs the majority of all inputs he has received. If there is no strict majority, output 0.  Assume any preference set with the all-zero output as first choice.  The proof works as before.

A General Solution  Task: define another protocol that will work for strict preferences with disagreement > 0s > 1s.  We have not found a simple efficient solution for this case.  Instead, we define a protocol based on Fitzi et al.'s work on detectable broadcast, which solves for a wide variety of preferences.

Definition: Detectable Broadcast  A protocol for detectable broadcast must satisfy the following three conditions: Correctness: All honest players either abort or accept and output 0 or 1. If any honest player aborts, so does every other honest player. If no honest players abort then the output satisfies the security conditions of broadcast. Completeness: If all players are honest, all players accept (and therefore, achieve broadcast without error). Fairness: If any honest player aborts then the adversary receives no information about the sender’s bit (not relevant to us).

Detectable broadcast (cont.)  Fitzi et al.'s protocol requires t + 5 rounds and O(n^8 (log n + k)^3) total bits of communication, where k is a security parameter.  It assumes a computationally bounded adversary.  This compares to one round and n^2 bits for the previous protocol.  Using detectable broadcast is therefore much less efficient.  However, this is not as bad when compared to protocols that achieve broadcast against malicious adversaries.

General protocol 1.Run the detectable broadcast protocol. 2.If an abort occurs, output the adversary's least-preferred outcome; otherwise, output the result of the detectable broadcast protocol.  Works any time the adversary has a known least-favorite outcome.  Works for t < n.
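A sketch of this wrapper, where `detectable_broadcast` is a hypothetical stand-in for Fitzi et al.'s protocol, assumed here to return either ("abort", None) or ("accept", bit):

```python
def rational_broadcast(detectable_broadcast, least_preferred):
    """General protocol: run detectable broadcast; on abort, every
    honest player outputs the adversary's least-preferred outcome.

    Because aborting yields the adversary's worst outcome, a rational
    adversary never gains by forcing an abort.
    """
    status, bit = detectable_broadcast()
    if status == "abort":
        return least_preferred
    return bit
```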

Conclusion  Rational adversaries do allow improved results on BA/broadcast.  For many adversary preferences, we have matching possibility and impossibility results.  More complicated adversary preferences remain to be analyzed.