Università degli Studi dell’Aquila Academic Year 2014/2015 Course: Algorithms for Distributed Systems Instructor: Prof. Guido Proietti Schedule: Tuesday: – – Room A1.3; Thursday: – – Room A1.4. Questions?: Tuesday (or send an email). Slides plus other info:

Distributed Systems
In the old days: a number of workstations over a LAN. Today:
- Collaborative computing systems: military command and control, online strategy games, massive computation
- Distributed real-time systems: process control, navigation systems, Airline Traffic Monitoring (ATM)
- Mobile ad hoc networks: rescue operations, emergency operations, robotics
- Wireless sensor networks: habitat monitoring, intelligent farming
- Grid and cloud computing
- …

And then, the mother of all DS: the Internet

Two main ingredients in the course: Distributed Systems + Algorithms
Distributed system (DS): broadly speaking, a set of autonomous computational devices (say, processors) performing multiple operations/tasks simultaneously, and influencing each other either by taking actions or by exchanging messages (using an underlying communication infrastructure).
We will be concerned with the computational aspects of a DS. We will analyze a DS depending on the cooperativeness level of its processors:
- Obedient: always cooperate honestly with the system ⇒ classic field of distributed computing
- Adversarial: may operate against the system ⇒ classic field of fault tolerance in DS
- Selfish: choose strategically how to behave ⇒ emerging field of game-theoretic aspects of DS

Two main ingredients in the course: Distributed Systems + Algorithms (2)
Algorithm (informal definition): an effective method, expressed as a finite list of well-defined instructions, for solving a given problem (e.g., calculating a function, implementing a goal, reaching a benefit, etc.).
Regardless of their nature, processors in a DS follow a strategy which is dictated by an algorithm! We will analyze these algorithms in (almost) every respect: existence, correctness, finiteness, efficiency (computational complexity), effectiveness, robustness (w.r.t. a given fault-tolerance concept), stability (w.r.t. a given equilibrium concept in a strategic DS), etc.

Some Issues in Designing Distributed Applications
- Reliability (connectivity)
- Security (cryptography)
- Consistency (mutual exclusion)
- Cooperativeness (game theory)
- Fault tolerance (failures, recoveries, …)
- Performance: what is the efficiency of the designed algorithm?
- Scalability: how is the performance affected as the number of processors increases?

Course structure
FIRST PART: Algorithms for COOPERATIVE DS
1. Leader election
2. Minimum spanning tree
3. Maximal independent set
SECOND PART: Algorithms for UNRELIABLE DS
1. Benign failures: consensus problem
2. Byzantine failures: consensus problem
3. Failure monitoring
Mid-term written examination (last week of November): 10 multiple-choice tests, plus an open-answer question, plus the design of a distributed algorithm.
THIRD PART: Algorithms for CONCURRENT DS: mutual exclusion
FOURTH PART: Algorithms for STRATEGIC DS: strategic equilibria theory
FIFTH PART: Implementation theory for STRATEGIC DS
1. Algorithmic mechanism design (AMD)
2. AMD for graph optimization problems
Final oral examination: this will cover either the whole program or just the second part of it, depending on the outcome of the mid-term exam.

Cooperative distributed algorithms: Message-Passing Systems (A Formal Model)

The System
- Topology: a network (connected undirected graph), with processors as nodes and communication channels as edges
- Degree of synchrony: asynchronous versus synchronous (universal clock)
- Degree of symmetry: anonymous (processors are indistinguishable) versus non-anonymous
- Degree of uniformity: uniform (number of processors is unknown) versus non-uniform
- Local algorithm: the algorithm associated with each single processor
- Distributed algorithm: the “composition” of local algorithms
General assumption: local algorithms are all the same (the so-called homogeneous setting); otherwise we could force the DS to behave as we want simply by mapping different algorithms to different processors, which is unfeasible in reality!

Notation
- n processors: p_0, p_1, …, p_(n-1). Each processor knows nothing about the network topology, except for its neighbors, numbered from 1 to r.
- Communication of each processor takes place only through message exchanges, using buffers associated with each neighbor, say OUT_BUFFER_i and IN_BUFFER_i, for each neighbor i = 1, …, r.
- Q_i: the state set for p_i, containing a distinguished initial state; each state describes the internal status of the processor and the status of the buffers.
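To make the buffer model concrete, here is an illustrative sketch (my own, not from the slides; the names follow the OUT_BUFFER/IN_BUFFER notation above) of a processor that can only communicate through per-neighbor buffers:

```python
# Sketch of a processor in the message-passing model: it holds an internal
# state plus one outgoing and one incoming buffer per neighbor i = 1..r.

class Processor:
    def __init__(self, pid, num_neighbors):
        self.pid = pid
        self.state = "initial"  # the distinguished initial state
        self.out_buffer = {i: [] for i in range(1, num_neighbors + 1)}
        self.in_buffer = {i: [] for i in range(1, num_neighbors + 1)}

    def send(self, neighbor, msg):
        # A computation event may append messages to outgoing buffers.
        self.out_buffer[neighbor].append(msg)

    def deliver_from(self, neighbor):
        # A message-delivery event removes a message from an incoming buffer.
        return self.in_buffer[neighbor].pop(0) if self.in_buffer[neighbor] else None

p = Processor(pid=0, num_neighbors=2)
p.send(1, "hello")          # queued for neighbor 1
p.in_buffer[2].append("world")  # the channel from neighbor 2 delivers a message
```

Note that a processor never reads another processor’s state directly; everything flows through the buffers, which is exactly what the model requires.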

Configurations and events
- System configuration: a vector [q_0, q_1, …, q_(n-1)], where q_i ∈ Q_i is the state of p_i
- Events: computation events (internal computations plus sending of messages) and message-delivery events (receipt of messages)

Execution
C_0 φ_1 C_1 φ_2 C_2 φ_3 …, where:
- C_0: the initial configuration
- C_i: a configuration
- φ_i: an event
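A minimal sketch (my own, under the definitions above) of an execution as an alternating sequence of configurations and events, where each event maps one configuration to the next:

```python
# An execution is C_0, phi_1, C_1, phi_2, ...: applying event phi_i to
# configuration C_{i-1} yields configuration C_i.

def run(initial_config, events):
    """Apply each event (a function on configurations) in turn,
    returning the full trace [C_0, C_1, ..., C_k]."""
    trace = [initial_config]
    for phi in events:
        trace.append(phi(trace[-1]))
    return trace

# Toy example: 3 processors whose state is just a counter; an event
# phi = step(i) is an internal computation at processor i.
def step(i):
    return lambda c: tuple(q + 1 if j == i else q for j, q in enumerate(c))

trace = run((0, 0, 0), [step(0), step(2), step(0)])
```

Here `step` and the counter states are purely hypothetical stand-ins for real computation and delivery events; the point is only the shape of the trace.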

Asynchronous Systems
- No upper bound on delivery times
- Admissible execution: each message sent is eventually delivered

Synchronous Systems
Each processor has a (universal) clock, and computation takes place in rounds. At each round, each processor:
1. Reads the incoming-messages buffer
2. Makes some internal computations
3. Sends messages, which will be read in the next round
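The three-step round structure can be sketched as a lock-step simulator (an illustrative sketch, not from the course; the "internal computation" here is just counting received messages):

```python
# Synchronous rounds: in every round each processor (1) reads the messages
# delivered in the previous round, (2) computes, (3) sends messages that
# become readable only in the next round.

def synchronous_rounds(procs, neighbors, rounds):
    inbox = {p: [] for p in procs}       # messages readable this round
    for _ in range(rounds):
        outbox = {p: [] for p in procs}  # filled during this round
        for p in procs:
            received = inbox[p]          # step 1: read incoming buffer
            value = len(received)        # step 2: some internal computation
            for q in neighbors[p]:       # step 3: send for the next round
                outbox[q].append((p, value))
        inbox = outbox                   # delivery happens between rounds
    return inbox

# Two processors connected by one channel, run for two rounds.
final = synchronous_rounds([0, 1], {0: [1], 1: [0]}, 2)
```

The key point the code makes explicit is that messages sent in round t are invisible until round t+1: the `outbox` only becomes the `inbox` after the round ends.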

Message Complexity
We will assume that each message can be arbitrarily long. According to this model, to establish the efficiency of an algorithm we will only count the total number of messages sent during any admissible execution of the algorithm (in other words, the number of message-delivery events), regardless of their size.

Time Complexity
We will assume that each processor has unlimited computational power. According to this model, to establish the efficiency of a synchronous algorithm we will simply count the number of rounds until termination. In asynchronous systems, time complexity is not really meaningful, since processors do not have a consistent notion of time.

Example: Distributed Depth-First Search (DFS) visit of a graph
- Visiting a (connected) graph G=(V,E) means exploring all the nodes and edges of the graph.
- General overview of the sequential algorithm:
  - Begin at some source vertex, r_0
  - When reaching any vertex v:
    - if v has an unvisited neighbor, then visit it and proceed further from it
    - otherwise, return to parent(v)
  - When we reach the parent of some vertex v such that parent(v) = NULL, we terminate, since v = r_0
- DFS defines a tree, with r_0 as the root, which spans all vertices of the graph.
- Sequential time complexity = Θ(|E|+|V|) (we use Θ notation because every execution of the algorithm costs exactly |E|+|V|, in an asymptotic sense)
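The sequential algorithm sketched above can be written out directly (a sketch assuming an adjacency-list input; `parent[r0] = None` plays the role of parent(v) = NULL):

```python
# Iterative DFS following the slide's description: advance to an unvisited
# neighbor when one exists, otherwise retreat to the parent; terminate when
# we would retreat from the root r0.

def dfs(graph, r0):
    parent = {r0: None}
    order = [r0]                      # vertices in first-visit order
    v = r0
    while True:
        unvisited = [u for u in graph[v] if u not in parent]
        if unvisited:                 # visit an unvisited neighbor
            u = unvisited[0]
            parent[u] = v
            order.append(u)
            v = u
        elif parent[v] is not None:   # all neighbors visited: return to parent
            v = parent[v]
        else:                         # back at the root with nothing left: done
            return parent, order

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
parent, order = dfs(graph, 0)
```

The `parent` map is exactly the DFS tree rooted at r_0 that the slide mentions, and it spans all vertices of the (connected) graph.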

DFS: an example (1/2)

DFS: an example (2/2)

Distributed DFS: an asynchronous algorithm
Distributed version (token-based): the token traverses the graph in a depth-first manner, using the algorithm described above.
1. Start the exploration (visit) at a waking-up node (root) r.
2. When v is visited for the first time:
   2.1 Inform all of its neighbors that it has been visited;
   2.2 Wait for acknowledgment from all neighbors;
   2.3 Resume the DFS process;
   2.4 If no unvisited neighbor node exists, pass the token back to the parent node.
Message complexity is Θ(|E|) (optimal, because of the trivial lower bound of Ω(|E|), induced by the fact that every node must know the status of each of its neighbors; this requires at least one message for each graph edge).
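A rough message-counting simulation (my own sketch, not the course's code; it models the token traversal sequentially via recursion) makes the Θ(|E|) bound tangible: each first visit triggers one "visited" notice plus one acknowledgment per neighbor, on top of the token moves themselves.

```python
# Count the messages of the token-based distributed DFS on an undirected
# graph given as adjacency lists: notify/ack on every incident edge at each
# first visit, plus one message per token move (forward or back).

def distributed_dfs(graph, root):
    visited = set()
    messages = 0

    def visit(v, parent):
        nonlocal messages
        visited.add(v)
        messages += 2 * len(graph[v])     # notify all neighbors + collect acks
        for u in graph[v]:
            if u not in visited:          # resume DFS: forward the token to u
                messages += 1
                visit(u, v)
        if parent is not None:            # no unvisited neighbor: token back
            messages += 1

    visit(root, None)
    return messages

# Triangle (3 nodes, 3 edges): 2 * 2 * 3 = 12 notify/ack messages,
# plus 4 token moves (2 forward along tree edges, 2 back).
count = distributed_dfs({0: [1, 2], 1: [0, 2], 2: [0, 1]}, 0)
```

Since every edge carries a constant number of notify/ack messages and the token crosses only tree edges (twice each), the total is O(|E|) + O(|V|) = Θ(|E|) messages on a connected graph.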

Distributed DFS (cont’d.)
Time complexity analysis (synchronous DS). We ensure that vertices visited for the first time know which of their neighbors have been visited; thus we make no unnecessary vertex explorations:
- Algorithm: freeze the DFS process; inform all neighbors of v that v has been visited; get Ack messages from those neighbors; restart the DFS process ⇒ a constant number of rounds (i.e., 3) for each newly discovered node
- |V| nodes are discovered ⇒ time complexity = Θ(|V|)
Homework: what happens to the algorithm's complexity if we do not inform the neighbors about having been visited?