Università degli Studi dell’Aquila Academic Year 2009/2010 Course: Algorithms for Distributed Systems Instructor: Prof. Guido Proietti Time: Monday: – – Room 0.6 Wednesday: – – Room 0.6 Questions?: Wednesday Slides plus other info:

Distributed System
A set of computational devices connected by a communication network.
- Old platform: usually a number of workstations (WSs) over a LAN
- Now: ranges from a LAN to a sensor network to a mobile network
Each node in a DS:
- is autonomous
- communicates by messages
- needs to synchronize with others to achieve a common goal (load balancing, fault tolerance, an application, ...)

Modern Distributed Applications
- Collaborative computing: military command and control, online strategy games, massive computation
- Distributed real-time systems: process control, navigation systems, Airline Traffic Monitoring (ATM)
- Mobile ad hoc networks: rescue operations, emergency operations, robotics
- Wireless sensor networks: habitat monitoring, intelligent farming
- Grid
- Stock market
- ...

The Internet

Some Issues in Building Distributed Applications
- Reliability (connectivity)
- Security (cryptography)
- Consistency (mutual exclusion)
- Cooperativeness (game theory)
- Fault tolerance (failures, recoveries, ...)
- Scalability: how is performance affected as the number of nodes increases?
- Performance: what is the complexity of the designed algorithm?

Course structure
FIRST PART: Algorithms for COOPERATIVE DS
1. Leader election
2. Minimum spanning tree
3. Maximal independent set
SECOND PART: Algorithms for UNRELIABLE DS
1. Benign failures: consensus problem
2. Byzantine failures: consensus problem
THIRD PART: Algorithms for CONCURRENT DS
1. Mutual exclusion
Mid-term written examination: first (?) week of December
FOURTH PART: DS SECURITY
1. Elements of cryptography
FIFTH PART: Algorithms for NON-COOPERATIVE (STRATEGIC) DS
1. Strategic equilibria theory
2. Algorithmic mechanism design (AMD)
3. AMD for graph optimization problems
SIXTH PART (???): Algorithms for WIRELESS DS
Final oral examination: depending on the mid-term grade

Cooperative distributed algorithms: Message Passing Systems (A Formal Model)

The System
- Topology: a network (connected undirected graph); processors are nodes, communication channels are edges
- Degree of synchrony: asynchronous versus synchronous (universal clock)
- Degree of symmetry: anonymous (processors are indistinguishable) versus non-anonymous
- Degree of uniformity: uniform (number of processors is unknown) versus non-uniform
- Local algorithm: the algorithm associated with a single processor
- Distributed algorithm: the "composition" of local algorithms

Notation
- n processors: p_0, p_1, ..., p_(n-1). Each processor knows nothing about the network topology, except for its neighbors, numbered from 1 to r
- Communication takes place only through message exchanges, using buffers associated with each neighbor, namely outbuf_i[k] and inbuf_i[k], k=1,...,r
- q_i: the state set for p_i, containing a distinguished initial state; each state describes the internal status of the processor and the status of the buffers
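The buffer notation above can be made concrete with a small sketch. The class and function names here are illustrative, not part of the course material; the point is only that a processor knows its r channels by local index 1..r, and that a delivery event moves a message from one endpoint's outbuf to the other endpoint's inbuf.

```python
from dataclasses import dataclass, field

@dataclass
class Processor:
    """One processor p_i in the message-passing model (hypothetical names).

    The processor knows only r, the number of its neighbors; buffers are
    indexed 1..r, matching the outbuf_i[k] / inbuf_i[k] notation above.
    """
    r: int                                    # number of neighbors
    state: str = "initial"                    # distinguished initial state
    outbuf: dict = field(default_factory=dict)
    inbuf: dict = field(default_factory=dict)

    def __post_init__(self):
        self.outbuf = {k: [] for k in range(1, self.r + 1)}
        self.inbuf = {k: [] for k in range(1, self.r + 1)}

def deliver(sender, k_out, receiver, k_in):
    """Delivery event: move one message from the sender's outbuf to the
    receiver's inbuf along their shared channel."""
    msg = sender.outbuf[k_out].pop(0)
    receiver.inbuf[k_in].append(msg)

# Usage: p0 sends "hello" to its only neighbor p1 over channel 1.
p0, p1 = Processor(r=1), Processor(r=1)
p0.outbuf[1].append("hello")
deliver(p0, 1, p1, 1)
```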

Configuration and events
- System configuration: a vector [q_0, q_1, ..., q_(n-1)] where q_i is the state of p_i
- Events: computation events (internal computations plus sending of messages), and message-delivery events

Execution
C_0 φ_1 C_1 φ_2 C_2 φ_3 ...
where C_i is a configuration, φ_i is an event, and C_0 is an initial configuration.

Asynchronous Systems
No upper bound on delivery times.
Admissible execution: each message sent is eventually delivered.

Synchronous Systems
Each processor has a (common) clock, and computation takes place in rounds. At each round each processor:
1. Reads the incoming-message buffers
2. Makes some internal computations
3. Sends messages, which will be read in the next round
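The three-step round structure can be simulated directly. This is a minimal sketch with hypothetical names: `compute(i, round_no, msgs)` stands for a processor's local algorithm, and the example `flood` algorithm simply re-floods a token whenever a processor hears it, so that on a path of 4 nodes every processor is informed within 4 rounds.

```python
def run_synchronous(n, adjacency, compute, max_rounds):
    """Run synchronous rounds over processors 0..n-1 (illustrative sketch).

    compute(i, round_no, msgs) returns a list of (neighbor, message)
    pairs; messages are delivered at the start of the next round,
    mirroring steps 1-3 of the slide above.
    """
    inboxes = [[] for _ in range(n)]
    for round_no in range(max_rounds):
        # steps 1-3: every processor reads, computes, and sends
        outboxes = [compute(i, round_no, inboxes[i]) for i in range(n)]
        # delivery events: messages become readable only next round
        inboxes = [[] for _ in range(n)]
        for i in range(n):
            for (j, m) in outboxes[i]:
                inboxes[j].append((i, m))
    return inboxes

# Example: flooding a token from processor 0 on the path 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
informed = set()

def flood(i, round_no, msgs):
    # a node (re-)floods whenever it hears the token; a real algorithm
    # would flood only once, but the round structure is the same
    if msgs or (i == 0 and round_no == 0):
        informed.add(i)
        return [(j, "token") for j in adj[i]]
    return []

run_synchronous(4, adj, flood, max_rounds=4)
```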

Message Complexity The total number of messages sent during any admissible execution of the algorithm. In other words, the number of delivery events.

Time Complexity
Synchronous: the number of rounds until termination.
Asynchronous: not really meaningful, since there is no upper bound on message delivery times.

Example: Distributed Depth-First Search
General overview of a sequential algorithm:
- Begin at some source vertex, r_0
- When reaching any vertex v:
  - if v has an unvisited neighbor, then visit it and proceed from it
  - otherwise, return to parent(v)
- When we reach the parent of some vertex v such that parent(v) = NULL, we terminate, since v = r_0
- DFS defines a tree, with r_0 as the root, which reaches all vertices in the graph
- "Back edges" = graph edges not in the tree
- Sequential time complexity = O(|edges| + |nodes|)
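The sequential algorithm sketched above fits in a few lines. This is an illustrative implementation (function and variable names are my own): it records parent pointers, which define the DFS tree with r_0 as root, and touches each edge a constant number of times, giving the O(|edges| + |nodes|) bound.

```python
def dfs_tree(adj, r0):
    """Sequential DFS from root r0.

    Returns (parent, order): parent pointers of the DFS tree
    (parent[r0] is None) and the vertices in visit order.
    """
    parent = {r0: None}
    order = []

    def visit(v):
        order.append(v)
        for w in adj[v]:
            if w not in parent:      # unvisited neighbor: descend into it
                parent[w] = v
                visit(w)
            # else (v, w) is a back edge, not a tree edge

    visit(r0)
    return parent, order

# Example: a 4-vertex graph with one back edge (0,2).
parent, order = dfs_tree({0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}, 0)
```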

DFS: an example (1/2)

DFS: an example (2/2)

Distributed DFS (cont’d.)
Distributed version (token-based): the token traverses the graph in a depth-first manner using the algorithm described above
1. Start exploration (visit) at the root r.
2. When v is visited for the first time:
   2.1 Inform all neighbors of v that v has been visited.
   2.2 Wait for an acknowledgment from all neighbors.
   2.3 Resume the DFS process.
Message complexity is O(|E|) (optimal, because of the lower bound of Ω(|edges|) to explore every edge)
- Note that edges are not examined from both endpoints: when edge (v,w) is examined by v, w then knows that v has been visited

Distributed DFS (cont’d.)
Time complexity analysis (synchronous DS)
We ensure that vertices visited for the first time know which of their neighbors have/have not been visited; thus we make no unnecessary vertex explorations:
- Algorithm: freeze the DFS process; inform all neighbors of v that v has been visited; get Ack messages from those neighbors; restart the DFS process => constant number of rounds for each newly discovered node
- Only O(n) nodes are discovered => time complexity = O(n)
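The message count of the token-based scheme can be checked with a small centralized simulation (all names hypothetical, a sketch rather than a faithful distributed implementation). Each newly visited node notifies all neighbors and collects acks, 2·deg(v) messages, so the token is never forwarded to an already-visited node; each tree edge then carries exactly 2 token messages, for 4|E| + 2(n-1) = O(|E|) messages in total.

```python
def token_dfs(adj, root):
    """Simulate the token-based DFS above, counting messages.

    knows[v] records which neighbors v has been told are visited;
    the token is forwarded only to neighbors not known to be visited.
    """
    knows = {v: set() for v in adj}
    order, msgs = [], 0

    def visit(v):
        nonlocal msgs
        order.append(v)
        msgs += 2 * len(adj[v])          # "v visited" notifications + acks
        for w in adj[v]:
            knows[w].add(v)              # every neighbor learns v is visited
        for w in adj[v]:
            if w not in knows[v]:        # v has not heard that w is visited
                msgs += 2                # token forwarded to w and returned
                visit(w)

    visit(root)
    return order, msgs

# On a triangle (n = 3, |E| = 3): 4|E| + 2(n-1) = 16 messages,
# and the back edge carries no token messages at all.
order, msgs = token_dfs({0: [1, 2], 1: [0, 2], 2: [0, 1]}, 0)
```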