ITEC452 Distributed Computing Lecture 2 – Part 3 Models in Distributed Systems Hwajung Lee

Other classifications of models

Reactive vs. transformational systems
- A reactive system never sleeps (example: a server, or a pool of servers).
- A transformational (non-reactive) system reaches a fixed point, after which no further change occurs in the system. (Examples?)

Named vs. anonymous systems
- In named systems, the process id is part of the algorithm; in anonymous systems it is not, so all processes are identical.
- (-) Symmetry breaking is often a challenge.
- (+) One process can be replaced by another with no side effect, and each process saves the log N bits needed to store an id.

Model and complexity

There are many measures:
- Space complexity
- Time complexity
- Message complexity
- Bit complexity
- Round complexity

What do these mean? Consider broadcasting in an n-cube (n = 3). (Figure: a 3-cube, with one node marked as the source.)
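For concreteness, here is a small Python sketch (not from the slides; the helper name hypercube_neighbors is my own) of the n-cube topology assumed in the examples that follow: node ids run from 0 to 2^n - 1, and two nodes are neighbors exactly when their ids differ in a single bit.

    def hypercube_neighbors(i, n):
        """Neighbors of node i in an n-dimensional hypercube: flip each of the n bits."""
        return [i ^ (1 << b) for b in range(n)]

    # The 3-cube from the slide: 8 nodes, each with 3 neighbors.
    n = 3
    for i in range(2 ** n):
        print(i, "->", sorted(hypercube_neighbors(i, n)))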

Broadcasting using messages

Each process j has a variable x[j], initially undefined.

{Process 0}
sends m to its neighbors

{Process i > 0}
repeat
  receive m {m contains the value};
  if m is received for the first time then
    x[i] := m.value;
    send x[i] to each neighbor j > i
  else
    discard m
  end if
forever

What are (1) the message complexity and (2) the space complexity per process?
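A hedged Python simulation of this scheme (function and helper names are my own): it delivers the value to every node of the 3-cube and counts the messages sent. Each edge (i, j) with i < j carries exactly one message, so the count equals the number of edges (12 for n = 3), which suggests O(|E|) message complexity; each process stores only x[i], so the space per process is a single value.

    from collections import deque

    def hypercube_neighbors(i, n):
        return [i ^ (1 << b) for b in range(n)]

    def broadcast_messages(n, value):
        N = 2 ** n
        x = [None] * N                              # x[j] initially undefined
        x[0] = value
        # Process 0 sends m to all of its neighbors.
        queue = deque((j, value) for j in hypercube_neighbors(0, n))
        messages = len(queue)
        while queue:
            i, m = queue.popleft()                  # process i receives m
            if x[i] is None:                        # first time: record and forward
                x[i] = m
                for j in hypercube_neighbors(i, n):
                    if j > i:                       # send only to larger-id neighbors
                        queue.append((j, x[i]))
                        messages += 1
            # otherwise, discard m
        return x, messages

    print(broadcast_messages(3, "v"))               # every x[j] becomes "v"; 12 messages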

Broadcasting using shared memory

Each process j has a variable x[j], initially undefined.

{Process 0}
x[0] := v

{Process i > 0}
repeat
  if ∃ a neighbor j < i : x[i] ≠ x[j] then
    x[i] := x[j] {this is a step}
  else
    skip
  end if
forever

What is the time complexity (i.e., how many steps are needed)? It can be arbitrarily large! WHY?
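A sketch of why the step count depends on the schedule (my own modeling choices: "undefined" is simulated as an arbitrary initial value, and the scheduler is a seeded random choice among enabled moves). Different seeds give different step counts on the same 3-cube.

    import random

    def hypercube_neighbors(i, n):
        return [i ^ (1 << b) for b in range(n)]

    def broadcast_shared(n, value, seed, max_steps=10_000):
        rng = random.Random(seed)
        N = 2 ** n
        x = [rng.randint(0, 9) for _ in range(N)]   # "undefined" modeled as garbage
        x[0] = value
        for steps in range(max_steps):              # safety cap for the demo
            # A step is enabled at i if some smaller-id neighbor holds a different value.
            enabled = [(i, j)
                       for i in range(1, N)
                       for j in hypercube_neighbors(i, n)
                       if j < i and x[j] != x[i]]
            if not enabled:
                return x, steps                     # quiescent: broadcast complete
            i, j = rng.choice(enabled)              # the scheduler picks one step
            x[i] = x[j]
        return x, max_steps

    for seed in range(3):
        print(broadcast_shared(3, "v", seed))       # step counts vary from run to run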

Broadcasting using shared memory (continued)

Each process j has a variable x[j], initially undefined.

Now use "large atomicity": in one step, a process j reads the states of ALL its neighbors with smaller ids, and updates x[j] only when these are all equal to one another and different from x[j].

What is the time complexity now, i.e., how many steps are needed? The time complexity is now O(n²).
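The same kind of simulation with the large-atomicity rule (again a sketch with assumed names): a process moves only when all of its smaller-id neighbors agree on a value that differs from its own, which removes most of the wasted copying.

    import random

    def hypercube_neighbors(i, n):
        return [i ^ (1 << b) for b in range(n)]

    def broadcast_large_atomicity(n, value, seed, max_steps=10_000):
        rng = random.Random(seed)
        N = 2 ** n
        x = [rng.randint(0, 9) for _ in range(N)]   # "undefined" modeled as garbage
        x[0] = value
        for steps in range(max_steps):
            enabled = []
            for i in range(1, N):
                lower = [x[j] for j in hypercube_neighbors(i, n) if j < i]
                # Enabled only if ALL smaller-id neighbors agree and differ from x[i].
                if lower and len(set(lower)) == 1 and lower[0] != x[i]:
                    enabled.append((i, lower[0]))
            if not enabled:
                return x, steps                     # quiescent: broadcast complete
            i, v = rng.choice(enabled)              # the scheduler picks one step
            x[i] = v
        return x, max_steps

    for seed in range(3):
        print(broadcast_large_atomicity(3, "v", seed))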

Time complexity in rounds

Each process j has a variable x[j], initially undefined.

Rounds are naturally defined for synchronous systems. An asynchronous round consists of a sequence of steps in which every process (including the slowest one) takes at least one step.

How many rounds are needed to complete the broadcast using the large-atomicity model?
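One way to explore the question empirically (the synchronous round model below is my own assumption, not taken from the slides): run the large-atomicity rule in rounds, where in each round every process reads the values as of the start of the round and takes one step. In this simulation the 3-cube finishes in 3 rounds, matching the distance from the source.

    def hypercube_neighbors(i, n):
        return [i ^ (1 << b) for b in range(n)]

    def rounds_to_broadcast(n, value):
        N = 2 ** n
        x = [None] * N                              # undefined
        x[0] = value
        rounds = 0
        while any(x[i] != value for i in range(N)):
            old = list(x)                           # values as of the start of the round
            for i in range(1, N):
                lower = [old[j] for j in hypercube_neighbors(i, n) if j < i]
                if (lower and len(set(lower)) == 1
                        and lower[0] is not None and lower[0] != old[i]):
                    x[i] = lower[0]
            rounds += 1
        return rounds

    print(rounds_to_broadcast(3, "v"))              # 3 rounds on the 3-cube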