Midterm Exam 06: Exercise 2 Atomic Shared Memory.


Slide 2: Atomic register

Every failed (write) operation appears either to be complete or not to have been invoked at all, and every complete operation appears to be executed at some instant between its invocation and reply time events.

In other words, an atomic register is:
- Regular: a READ returns the latest value written, or one of the values written concurrently, and
- a READ rd' that follows some (complete) read rd does not return an older value than rd.
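The second condition (no "new/old inversion" between sequential reads) can be checked mechanically on a history of reads. A minimal sketch, assuming each read result is tagged with the (hypothetical) timestamp of the value it returned:

```python
def no_new_old_inversion(read_results):
    """Check that sequential reads never observe an older value:
    the timestamps returned by successive reads must be non-decreasing."""
    timestamps = [ts for ts, _ in read_results]
    return all(a <= b for a, b in zip(timestamps, timestamps[1:]))

# Atomic Execution 2 from the slides: reads return 5, 6, 6 (timestamps 1, 2, 2).
assert no_new_old_inversion([(1, 5), (2, 6), (2, 6)])
# Non-Atomic Execution 2: reads return 5, 6, 5 (timestamps 1, 2, 1) -> inversion.
assert not no_new_old_inversion([(1, 5), (2, 6), (1, 5)])
```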

Slide 3: Non-Atomic Execution 1

[Timeline] P1: W(5), then W(6). P2: R1() -> 5, R2() -> 0, R3() -> 25.

Slide 4: Non-Atomic Execution 2

[Timeline] P1: W(5), then W(6). P2: R1() -> 5, R2() -> 6, R3() -> 5.

Slide 5: Non-Atomic Execution 3

[Timeline] P1: W(5), then W(6), crashing mid-write. P2: R() -> 6, then later R() -> 5.

Slide 6: Atomic Execution 1

[Timeline] P1: W(5), then W(6). P2: R1() -> 5, R2() -> 5, R3() -> 5.

Slide 7: Atomic Execution 2

[Timeline] P1: W(5), then W(6). P2: R1() -> 5, R2() -> 6, R3() -> 6.

Slide 8: Atomic Execution 3

[Timeline] P1: W(5), then W(6), crashing mid-write. P2: R() -> 5.

Slide 9: Atomic Execution 4

[Two timelines] P1: W(5), then W(6), crashing mid-write. P2: R() -> 5 in one timeline; R() -> 6 in the other.

Slide 10: Best case complexity

We build algorithms for the worst-case (unlucky) situations:
- Asynchrony
- Concurrency
- Many failures

However, very frequently the situation is not that bad (lucky executions):
- Synchrony
- No concurrency
- Few failures (or none at all)

Practical algorithms should take advantage of the lucky executions.

Slide 11: Exercise 2

Give a 1-writer n-reader atomic register implementation (n = 5, a majority of processes is correct) in which:
- L2: all read/write operations* complete in at most 2 round-trips. In every round-trip, a client (writer or reader) sends a message to all processes and awaits responses from some subset of processes.
- L1: all lucky read/write operations* complete in a single round-trip. A read/write operation op is lucky if:
  - the system is synchronous: messages among correct processes are delivered within time Δ (known to all correct processes),
  - op is not concurrent with any write operation, and
  - at most one process is faulty.

*invoked by a correct client
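The round-trip pattern defined above can be sketched as a helper function. This is a toy, single-threaded stand-in: "processes" are plain Python callables and delivery is a direct call, so asynchrony is not modeled; a `None` reply models a crashed process.

```python
def round_trip(processes, message, quorum_size):
    """Send `message` to all processes and collect replies until
    `quorum_size` responses arrive; crashed processes reply None."""
    replies = []
    for proc in processes:
        reply = proc(message)          # simulated message delivery
        if reply is not None:          # None models a crashed process
            replies.append(reply)
        if len(replies) >= quorum_size:
            break                      # quorum reached; stop waiting
    return replies

# With n = 5, one crashed process, and a majority quorum of 3:
acks = round_trip([lambda m: "ack"] * 4 + [lambda m: None], "W", 3)
assert len(acks) == 3
```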

Slide 12: The majority algorithm [ABD95]

All reads and writes complete in a single round-trip: a client sends a message to all processes and waits for responses from a majority. However, this algorithm implements only a regular register (not an atomic one). To make the algorithm atomic, readers impose the value with the highest timestamp on a majority of processes, which requires a second round-trip.
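The majority algorithm with the read write-back fix can be sketched as follows. This is a toy, single-threaded model: replica processes are plain dicts and the "network" is direct access to a chosen majority, so it only illustrates the message pattern, not asynchrony or failures. The writer updates one majority; readers query a (possibly different) majority, which must intersect it.

```python
class MajorityRegister:
    """Sketch of 1-writer [ABD95] over n replicas, plus the read write-back."""

    def __init__(self, n=5):
        self.maj = n // 2 + 1                     # majority quorum size
        self.replicas = [{"sn": 0, "v": 0} for _ in range(n)]
        self.ts = 0                               # single writer's timestamp

    def write(self, v):
        self.ts += 1
        # One round-trip: update any majority (here: the first maj replicas).
        for r in self.replicas[: self.maj]:
            if self.ts > r["sn"]:
                r["sn"], r["v"] = self.ts, v

    def read(self):
        # Round-trip 1: query a majority (here: the last maj replicas,
        # which necessarily intersect the writer's majority).
        votes = [(r["sn"], r["v"]) for r in self.replicas[-self.maj:]]
        sn, v = max(votes)                        # highest-timestamped value
        # Round-trip 2 (the atomicity fix): impose (sn, v) on a majority.
        for r in self.replicas[-self.maj:]:
            if sn > r["sn"]:
                r["sn"], r["v"] = sn, v
        return v
```

Without the second loop in `read()`, two sequential reads contacting different majorities could return 6 and then 5, exactly the new/old inversion that regular (but non-atomic) registers allow.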

Slide 13: Lucky operations

If the operation is lucky, the client will receive (at least) 4 (out of 5) responses before the timer expires.

[Timeline] The request is sent at time t with timer = 2Δ; it reaches the correct processes by t + Δ, and their responses arrive by t + 2Δ.
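A toy check of the timing argument, under assumed values (DELTA in hypothetical time units, request sent at t = 0):

```python
DELTA = 1.0
# In a lucky run at most one of the 5 processes is faulty; each correct
# process receives the request by DELTA and its reply lands by 2 * DELTA.
reply_times = [2 * DELTA] * 4 + [None]   # None: the one faulty process
timer = 2 * DELTA
in_time = [t for t in reply_times if t is not None and t <= timer]
assert len(in_time) >= 4   # the client collects at least 4 of 5 responses
```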

Slide 14: Solution

In the following slides we modify the majority algorithm of [ABD95]. ([ABD95] is given in slides 30-32 of the Regular Register Algorithms lecture notes, in Shared Memory part 2.)

Slide 15: Algorithm - Write()

Write(v) at p1 (the writer):
  ts1++
  trigger(timer = 2Δ)
  send [W, ts1, v] to all
  when received [W, ts1, ack] from a majority:
    wait for expiration of the timer
    if received 4 acks then
      return ok
    else
      send [W2, ts1, v] to all
      when received [W2, ts1, ack] from a majority:
        return ok

At pi:
  when receive [W, ts1, v] from p1:
    if ts1 > sni then
      vi := v; sni := ts1
    send [W, ts1, ack] to p1
  when receive [W2, ts1, v] from p1:
    if ts1 > sn2i then
      v2i := v; sn2i := ts1
    send [W2, ts1, ack] to p1
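The writer's two-phase logic above can be sketched as a function over dict replicas. This is a hedged, single-threaded model: `responders` stands in for the set of processes whose acks arrived before the 2Δ timer, so the network and timer themselves are not simulated. Each replica holds both slots, (v, sn) written by the W round and (v2, sn2) written only by the W2 round.

```python
def write(replicas, ts, v, responders):
    """Sketch of the writer: W round to all, W2 round only if < 4 acks."""
    # Round-trip 1: W message; each responder updates its (v, sn) slot.
    for i in responders:
        r = replicas[i]
        if ts > r["sn"]:
            r["v"], r["sn"] = v, ts
    if len(responders) >= 4:          # lucky: 4 of 5 acks before the timer
        return "fast"
    # Round-trip 2: W2 message; responders update the (v2, sn2) slot.
    for i in responders:
        r = replicas[i]
        if ts > r["sn2"]:
            r["v2"], r["sn2"] = v, ts
    return "slow"

replicas = [{"v": 0, "sn": 0, "v2": 0, "sn2": 0} for _ in range(5)]
assert write(replicas, 1, 5, [0, 1, 2, 3]) == "fast"   # 4 acks: one round
assert replicas[0]["sn2"] == 0                          # W2 slot untouched
assert write(replicas, 2, 6, [0, 1, 2]) == "slow"       # 3 acks: W2 round
assert replicas[0]["sn2"] == 2 and replicas[0]["v2"] == 6
```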

Slide 16: How the (lucky) write works

[Diagram] p1 (the writer) executes ts1++ (ts1 = 1), triggers timer = 2Δ, and sends [W, ts1, v] to all. Each replica pi holds (vi, sni) and (v2i, sn2i), all initially (v0, 0). Processes p2-p5 update (vi, sni) to (v1, 1) and send ACKs; the (v2i, sn2i) slots stay (v0, 0). p1 receives 4 ACKs before the timer expires.

Slide 17: How the (unlucky) write works

[Diagram] p1 (the writer) executes ts1++ (ts1 = 1), triggers timer = 2Δ, and sends [W, ts1, v] to all. Only 3 processes update (vi, sni) to (v1, 1) and ACK before the timer expires, so p1 sends [W2, ts1, v] to all and waits for a majority of acks; those replicas then set (v2i, sn2i) to (v1, 1).

Slide 18: Algorithm - Read()

Read() at pi:
  rsi++
  trigger(timer = 2Δ)
  send [R, rsi] to all
  when received [R, rsi, snj, vj] from a majority:
    v := the vj with the largest snj
    return v

At pi:
  when receive [R, rsj] from pj:
    send [R, rsj, sni, vi] to pj

Slide 19: Algorithm - Read()

Read() at pi:
  rsi++
  trigger(timer = 2Δ)
  send [R, rsi] to all
  when received [R, rsi, snj, vj, sn2j, v2j] from a majority:
    wait for expiration of the timer
    v := the vj with the largest snj
    return v

At pi:
  when receive [R, rsj] from pj:
    send [R, rsj, sni, vi, sn2i, v2i] to pj

Slide 20: Algorithm - Read()

Read() at pi:
  rsi++
  trigger(timer = 2Δ)
  send [R, rsi] to all
  when received [R, rsi, snj, vj, sn2j, v2j] from a majority:
    wait for expiration of the timer
    v := the vj or v2j with the largest snj or sn2j
    if v is some v2j, or there are 3 responses where vj = v and snj = snMAX, then
      return v
    else
      send [W, ts1, v] to all        % ts1 is the timestamp of v
      when received [W, ts1, ack] from a majority:
        return v

At pi:
  when receive [R, rsj] from pj:
    send [R, rsj, sni, vi, sn2i, v2i] to pj
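The fast-path test in this Read() can be sketched as a pure function over the collected responses. Assumptions in this sketch: the initial value is 0 with timestamp 0, responses are (snj, vj, sn2j, v2j) tuples, and timestamps uniquely identify written values (single writer), so a tie in timestamps implies the same value.

```python
def read_fast_path(responses):
    """Return (value, lucky): the value to return and whether the read
    may finish in one round-trip without a write-back."""
    best_sn, best_v, from_w2 = -1, None, False
    for sn, v, sn2, v2 in responses:
        if sn > best_sn:                       # candidate from a W slot
            best_sn, best_v, from_w2 = sn, v, False
        if sn2 >= best_sn:                     # candidate from a W2 slot
            best_sn, best_v, from_w2 = sn2, v2, True
    # Count first-round responses that carry the winning (v, snMAX) pair.
    support = sum(1 for sn, v, _, _ in responses
                  if sn == best_sn and v == best_v)
    lucky = from_w2 or support >= 3
    return best_v, lucky

# After a lucky write: 4 responses carry (1, 5) in the W slot.
assert read_fast_path([(1, 5, 0, 0)] * 4) == (5, True)
# After an unlucky write: some v2j carries the largest timestamp.
assert read_fast_path([(1, 5, 1, 5), (0, 0, 0, 0), (0, 0, 0, 0)]) == (5, True)
# Only 2 of 4 responses carry (v, snMAX): the reader must write back.
assert read_fast_path([(1, 5, 0, 0), (1, 5, 0, 0),
                       (0, 0, 0, 0), (0, 0, 0, 0)]) == (5, False)
```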

Slide 21: How the lucky read works

[Diagram: p1-p5] Following the lucky write, p5 knows that a value with the largest timestamp has already been imposed on a majority (in our case p2, p3, and p4).

Slide 22: How the lucky read works

[Diagram: p1-p5] Following the unlucky (2-round-trip) write, p5 saw at least one "green" value (some v2j) with the largest timestamp; hence, this value has already been imposed on a majority in the first round-trip of the write.

Slide 23: Unlucky read

An unlucky read must impose the value with the largest timestamp on a majority of processes:

  if v is some v2j, or there are 3 responses where vj = v and snj = snMAX, then
    return v
  else
    send [W, ts1, v] to all
    when received [W, ts1, ack] from a majority:
      return v

Note: the reader sends W, not W2! Readers impose values on the "yellow" variables (vi, sni), not the "green" ones; only the writer writes into the "green" variables (v2i, sn2i).

Slide 24: An offline exercise

Try to rigorously prove the correctness of this algorithm. Proving correctness may appear on the final exam.