CMPT 401 Summer 2007 Dr. Alexandra Fedorova Lecture XI: Distributed Transactions

2 CMPT 401 Summer 2007 © A. Fedorova
A Distributed Transaction
A transaction is distributed across n processes. Each process can decide to commit or abort the transaction.
A transaction must commit on all sites or abort on all sites.
[Diagram: an Application Program / Coordinating Server exchanges commit messages with a Participating Site.]

3 CMPT 401 Summer 2007 © A. Fedorova
Comparison With Single-Site Transactions
For single-site transactions we talked about:
–Committing a transaction
–Recovery (for atomicity, consistency and durability)
–Concurrency control (for atomicity, consistency and isolation)
For distributed transactions we will talk about:
–Distributed commitment
–Distributed recovery
–Distributed concurrency control

4 CMPT 401 Summer 2007 © A. Fedorova
A Distributed Transaction
# Some participant (the invoker) executes:
send [T_START: transaction, Dc, participants] to all participants

# All participants (including the invoker) execute:
upon (receipt of [T_START: transaction, Dc, participants])
    Cknow = local_clock
    # Perform operations requested by transaction
    if (willing and able to make updates permanent) then vote = YES
    else vote = NO
    # Decide commit or abort for the transaction
    atomic_commitment(transaction, participants)

5 CMPT 401 Summer 2007 © A. Fedorova
Atomic Commitment Problem
All processes must decide to commit or abort.
–Is this possible in an asynchronous system? No. This is like consensus, and we saw that consensus is impossible.
Atomic commitment is nevertheless used in asynchronous systems, with no guarantee of termination, under these assumptions:
–Communication is reliable
–Processes can crash
–Processes recover from failure
–Processes can log their state in stable storage
–Stable storage survives crashes and is accessible upon restart

6 CMPT 401 Summer 2007 © A. Fedorova
Properties of Atomic Commitment
Property 1: All participants that decide reach the same decision.
Property 2: If any participant decides commit, then all participants must have voted YES.
Property 3: If all participants vote YES and no failures occur, then all participants decide commit.
Property 4: Each participant decides at most once (a decision is irreversible).

7 CMPT 401 Summer 2007 © A. Fedorova
Components of Atomic Commitment
Normal execution
–The steps executed while no failures occur
Termination protocol
–When a site fails, the correct sites should still be able to decide on the outcome of pending transactions
–They run a termination protocol to decide on all pending transactions
Recovery
–When a site fails and then restarts, it must perform recovery for all transactions that it had not yet completed
–Single-site recovery: abort all transactions that were active at the time of the failure
–Distributed system: might have to ask around; maybe an active transaction was committed in the rest of the system

8 CMPT 401 Summer 2007 © A. Fedorova
Blocking vs. Non-Blocking Atomic Commitment
Blocking atomic commitment: correct participants may be prevented from terminating the transaction by failures in other parts of the system.
Non-blocking atomic commitment: transactions terminate consistently at all participating sites even in the presence of failures.
Non-blocking atomic commitment is not possible in systems with unreliable communication.
Non-blocking algorithms exist if one assumes reliable communication and a synchronous system.

9 CMPT 401 Summer 2007 © A. Fedorova
Two-Phase Commit Protocol (2PC)
An atomic commitment protocol:
–Phase 1: Each participant votes to commit or abort
–Phase 2: Get the final decision from the coordinator, and execute that decision
We will study the protocol in a synchronous system:
–Assume a message arrives within interval δ
–Assume we can compute Dc – the local time required to complete the transaction
–Assume we can compute Db – the additional delay associated with broadcast

10 CMPT 401 Summer 2007 © A. Fedorova
Two-Phase Commit Protocol (I)
# Executed by coordinator
procedure atomic_commitment(transaction, participants)
    send [VOTE_REQUEST] to all participants
    set-timeout-to local_clock + 2δ
    wait-for (receipt of [vote: vote] messages from all participants)
        if (all votes are YES) then broadcast(commit, participants)
        else broadcast(abort, participants)
    on-timeout
        broadcast(abort, participants)

11 CMPT 401 Summer 2007 © A. Fedorova
Two-Phase Commit Protocol (II)
# Executed by all participants (including the coordinator)
set-timeout-to Cknow + Dc + δ
wait-for (receipt of [VOTE_REQUEST] from coordinator)
    send [vote: vote] to coordinator
    if (vote = NO) then decide abort
    else
        set-timeout-to Cknow + Dc + 2δ + Db
        wait-for (delivery of decision message)
            if (decision message is abort) then decide abort
            else decide commit
        on-timeout
            decide according to termination_protocol()   # may block the participant
on-timeout
    decide abort
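To make the two phases concrete, here is a minimal single-process Python sketch of 2PC. It is an illustration only: the class and method names (Coordinator, Participant, on_vote_request) are invented, and real networking, logging, and timeouts are omitted.

from enum import Enum

class Vote(Enum):
    YES = 1
    NO = 2

class Decision(Enum):
    COMMIT = 1
    ABORT = 2

class Participant:
    def __init__(self, willing):
        self.willing = willing      # able to make updates permanent?
        self.decision = None

    def on_vote_request(self):
        # Phase 1: vote YES only if willing and able to commit.
        return Vote.YES if self.willing else Vote.NO

    def on_decision(self, decision):
        # Phase 2: apply the coordinator's final decision.
        self.decision = decision

class Coordinator:
    def __init__(self, participants):
        self.participants = participants

    def run(self):
        votes = [p.on_vote_request() for p in self.participants]
        # Commit only if every participant voted YES (Properties 2 and 3).
        decision = (Decision.COMMIT if all(v is Vote.YES for v in votes)
                    else Decision.ABORT)
        for p in self.participants:
            p.on_decision(decision)
        return decision

print(Coordinator([Participant(True), Participant(False)]).run())  # Decision.ABORT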

12 CMPT 401 Summer 2007 © A. Fedorova
The Need for Termination Protocol
If a participant
–Voted YES
–Sent its vote to the coordinator, but…
–Received no final decision from the coordinator
then a termination protocol must be run:
–Participants cannot simply decide to abort
–If they already said they would commit, they cannot change their minds
–The coordinator might have sent “commit” decisions to some participants and then crashed
–Since those participants might have committed, no other participant can decide “abort”
A termination protocol will try to contact other participants to find out what they decided, and try to reach a decision.
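One common choice is a cooperative termination protocol: the uncertain participant polls its peers, and any peer that already knows the outcome (or that voted NO) resolves the uncertainty. The sketch below is a hedged illustration, not the course's prescribed protocol; the peer attributes (decision, voted_yes) are hypothetical names.

def cooperative_termination(peers):
    # peers: objects with .decision ("commit"/"abort"/None) and .voted_yes (bool).
    for peer in peers:
        if peer.decision is not None:
            return peer.decision        # adopt any outcome someone already knows
        if not peer.voted_yes:
            return "abort"              # a NO voter guarantees no commit happened
    return None                         # all reachable peers uncertain: stay blocked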

13 CMPT 401 Summer 2007 © A. Fedorova
Blocking Nature of Two-Phase Commit
All protocols involving a termination protocol are blocking.
A termination protocol may lead to a decision, but there are scenarios where it will not:
–The coordinator crashes during the broadcast of a decision
–Several participants received the decision from the coordinator, applied it, and then crashed
–All correct (not crashed) participants voted “YES”
–Correct participants cannot decide until faulty participants recover
–If some faulty participants had decided “NO”, they would have aborted
–If some faulty participants had received a “commit” decision from the coordinator, they would have committed

14 CMPT 401 Summer 2007 © A. Fedorova
Non-Blocking Atomic Commitment
Additional property (non-blocking): Every correct participant that executes the atomic commitment protocol eventually decides.
Recall that 2PC was blocking because the coordinator could broadcast the decision to some of the participants and then crash.
We need a broadcast with stronger properties, such as Uniform Timed Reliable Broadcast (UTRB).
Property of UTRB: If any participant (correct or not) receives a broadcast message m, then all correct participants eventually receive m.
With UTRB, if a participant times out waiting for the final decision, it can unilaterally decide to abort.
The participant knows that if it has not received a message, then no one else has, so no one else could have committed.

15 CMPT 401 Summer 2007 © A. Fedorova
A Simple UTRB Algorithm
procedure broadcast(m, G)    // G is the set of participants
# Broadcaster executes:
    send [m] to all processes in G
    deliver m                // local delivery of message m to the process

# Process p ≠ broadcaster in G executes:
upon (first receipt of [m])
    send [m] to all processes in G
    deliver m

The message is not delivered to the local process before it is forwarded to the other participants.
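A rough Python rendering of the relay rule, under stated assumptions: network.send(dest, m) is a hypothetical reliable transport, and each process tracks messages it has already seen so only the first receipt triggers the relay.

class Process:
    def __init__(self, pid, group, network):
        self.pid = pid
        self.group = group            # ids of all processes in G
        self.network = network        # assumed transport with send(dest, m)
        self.seen = set()

    def broadcast(self, m):
        for q in self.group:
            if q != self.pid:
                self.network.send(q, m)
        self.deliver(m)               # deliver only after forwarding

    def receive(self, m):
        if m in self.seen:
            return                    # relay only on first receipt
        self.seen.add(m)
        for q in self.group:          # forward BEFORE local delivery, so a
            if q != self.pid:         # crash after delivery cannot leave
                self.network.send(q, m)   # correct processes without m
        self.deliver(m)

    def deliver(self, m):
        print(self.pid, "delivers", m)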

16 CMPT 401 Summer 2007 © A. Fedorova
2PC in Asynchronous Systems
2PC can be implemented in an asynchronous system with reliable communication channels.
This means that a message eventually gets delivered…
But we cannot set bounds on delivery time, so a process might have to wait forever…
Therefore, you cannot have non-blocking atomic commitment in an asynchronous system.
What if a participant whose message is waited on has crashed? The expectation is that the participant will properly recover and continue the protocol.
So now let’s look at distributed recovery.

17 CMPT 401 Summer 2007 © A. Fedorova
Distributed Recovery
Remember single-site recovery:
–Transaction log records are kept on stable storage
–Upon reboot the system “undoes” updates from active or uncommitted transactions
–It “replays” updates from committed transactions
In a distributed system we cannot be sure whether a transaction that was active at the time of the crash:
–Is still active
–Has committed
–Has aborted
–May even have executed more updates while the recovering site was crashing and rebooting

18 CMPT 401 Summer 2007 © A. Fedorova
Components of Distributed Recovery
Actions performed by the recovering site:
–For each transaction that was active before the crash, try to decide unilaterally based on log records
–Ask others what they have decided
Actions performed by other participants:
–Others may send the decision
–Or a “don’t know” message

19 CMPT 401 Summer 2007 © A. Fedorova
Crash Before Local Decision
Suppose a site crashes during the execution of a transaction, before it reaches a local decision (YES or NO).
The transaction could have completed at other sites. What are the options?
Option 1: The crashed site restores its state with the help of other participants (restoring the updates made while it was crashing and recovering).
Option 2: The crashed site realizes that it crashed (by keeping a crash count), and sets its local decision to NO.

20 CMPT 401 Summer 2007 © A. Fedorova
Logging for Distributed Recovery
Coordinator: forces the “commit” decision to the log before informing any participants
–Like the redo rule for single-site logging
Participant: forces its vote (YES or NO) to disk before sending the vote to the coordinator
–This way it knows that it must reach a decision in agreement with others
Participant: forces the final decision (received from the coordinator) to the log, then responds to the coordinator
–Once the coordinator receives responses from all participants, it can remove its own decision log record
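The essential invariant is the write ordering: each record must reach stable storage before the corresponding message is sent. A minimal sketch, assuming a hypothetical log object whose force() blocks until appended records are durable:

def participant_vote(log, send_to_coordinator, vote):
    log.append(("vote", vote))
    log.force()                  # vote durable BEFORE it is sent: after a
    send_to_coordinator(vote)    # crash, recovery knows a YES vote binds us

def participant_apply(log, ack_coordinator, decision):
    log.append(("decision", decision))
    log.force()                  # decision durable before acknowledging, so
    ack_coordinator(decision)    # the coordinator may discard its own record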

21 CMPT 401 Summer 2007 © A. Fedorova
Distributed Concurrency Control
Multiple servers execute transactions; they share data distributed across sites.
A lock on a data item may be requested by many different servers.
Distributed concurrency control methods:
–Centralized two-phase locking (C-2PL)
–Distributed two-phase locking (D-2PL)
–Optimistic concurrency control

22 CMPT 401 Summer 2007 © A. Fedorova
Distributed Concurrency Control: Notation
SCH – global lock scheduler
Ti – a transaction
Oij(X) – an operation requiring a lock on X, executed at server Sj as part of Ti
S1, S2, … – servers performing a distributed transaction
TM1, TM2, … – transaction managers, one for each server

23 CMPT 401 Summer 2007 © A. Fedorova
Centralized 2PL
Let S1 be the server to which transaction Ti was submitted.
Let S2 be the server maintaining data item X.
For each operation Oi1(X), TM1 first requests the corresponding lock from SCH (the central 2PL scheduler).
Once the lock is granted, the operation is forwarded to the server S2 maintaining X.

24 CMPT 401 Summer 2007 © A. Fedorova
Centralized 2PL
[Diagram: message flow — TM1 at S1 requests the lock for Read(x) from SCH; SCH grants the lock; the operation is forwarded to S2 and executed; on commit, locks are released.]
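A toy sketch of the centralized scheme: a single scheduler object grants per-item locks, and the submitting server forwards the operation only after the grant. Class and method names (CentralScheduler, acquire, release_all) are invented for illustration, and blocking is reduced to a retry loop.

import threading

class CentralScheduler:
    def __init__(self):
        self._guard = threading.Lock()
        self._locks = {}                  # item -> owning transaction id

    def acquire(self, txn, item):
        with self._guard:
            owner = self._locks.get(item)
            if owner is None or owner == txn:
                self._locks[item] = txn   # growing phase: take the lock
                return True
            return False                  # held by another transaction

    def release_all(self, txn):
        with self._guard:                 # shrinking phase: at commit/abort
            self._locks = {i: t for i, t in self._locks.items() if t != txn}

def run_operation(sched, txn, item, execute_at_data_server):
    while not sched.acquire(txn, item):   # TM1 asks SCH before forwarding
        pass                              # real code would block, not spin
    execute_at_data_server(item)          # forwarded to S2 only after grant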

25 CMPT 401 Summer 2007 © A. Fedorova
Distributed 2PL
Each server has its own local 2PL scheduler SCH.
Let S1 be the server to which transaction Ti was submitted:
–For each operation Oi1(X), TM1 forwards the operation to TM2, the transaction manager of server S2 maintaining X
–The remote site first acquires a lock on X and then submits the operation for execution

26 CMPT 401 Summer 2007 © A. Fedorova
Distributed 2PL
[Diagram: message flow — TM1 at S1 forwards Read(X) to TM2 at S2; TM2 requests the lock from S2’s local scheduler, then submits the operation; the operation is executed; on commit, locks are released.]
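The distributed variant differs only in where the lock lives: the operation is forwarded first, and the data server's own scheduler locks X before execution. A minimal sketch that reuses the hypothetical CentralScheduler above as a per-server local scheduler:

class DataServer:
    def __init__(self, name):
        self.name = name
        self.scheduler = CentralScheduler()   # local 2PL scheduler (SCH of S2)

    def submit(self, txn, item, op):
        while not self.scheduler.acquire(txn, item):
            pass                              # lock acquired where the data lives
        op(item)                              # then the operation executes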

27 CMPT 401 Summer 2007 © A. Fedorova
Optimistic Concurrency Control
Locking is conservative:
–Locking overhead is paid even if there are no conflicts
–Deadlock detection/resolution is needed (especially problematic in a distributed environment)
–The lock manager can fail independently
Optimistic concurrency control:
–Perform the operation first
–Check for conflicts only later (e.g., at commit time)
Optimistic concurrency control is rarely used in centralized systems, but it is an interesting alternative for a distributed environment.

28 CMPT 401 Summer 2007 © A. Fedorova
Optimistic Concurrency Control
Working phase:
–If this is the first operation on X, load the last committed version from the DB and cache it
–Otherwise read/write the cached version
–Keep a WriteSet containing the objects written
–Keep a ReadSet containing the objects read
Validation phase:
–Check whether the transaction conflicts with other transactions
Update phase:
–Upon successful validation, cached versions of updated objects are written back to the DB (i.e., the changes are made public)
Assumption (for simplicity):
–Only one transaction may be in the validation and update phases at a time (e.g., implemented as a critical section)

29 CMPT 401 Summer 2007 © A. Fedorova
Backward Validation: Working Phase
Maintain a transaction counter Tn, initialized Tn := 0
Upon begin of transaction Ti (could be the first read/write request):
–StartTn(Ti) := Tn
–WriteSet(Ti) := ReadSet(Ti) := { }
Upon a Read(X) or Write(X) request of Ti:
–If this is the first operation on X, load the last committed version from the DB and cache it
–Read/write the cached version of X
–If Read(X), then ReadSet(Ti) := ReadSet(Ti) ∪ { X }
–If Write(X), then WriteSet(Ti) := WriteSet(Ti) ∪ { X }

30 CMPT 401 Summer 2007 © A. Fedorova
Backward Validation: Validation and Update Phases
At the time of validation of transaction Ti:
–For all j where StartTn(Ti) < j ≤ Tn:
–Let WriteSet(Tk) be the write set of the transaction Tk with finishTn(Tk) = j
–If WriteSet(Tk) ∩ ReadSet(Ti) ≠ { }, then abort Ti
After validation (update):
–Tn := Tn + 1
–finishTn(Ti) := Tn
–Write WriteSet(Ti) to the DB
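The pseudocode above translates almost directly into Python. A minimal sketch: the module-level counter and history dict stand in for the scheduler's state, and the actual DB write is elided.

Tn = 0
history = {}                        # finishTn value -> WriteSet of that transaction

class Transaction:
    def __init__(self):
        self.start_tn = Tn          # StartTn(Ti) := Tn at begin
        self.read_set = set()
        self.write_set = set()

def validate_and_update(t):
    global Tn
    # Backward validation: our ReadSet vs. the WriteSets of every transaction
    # with StartTn(t) < finishTn <= Tn.
    for j in range(t.start_tn + 1, Tn + 1):
        if history[j] & t.read_set:
            return False            # non-empty intersection: abort t
    Tn += 1
    history[Tn] = set(t.write_set)  # finishTn(t) := Tn
    # ... write t.write_set back to the DB here ...
    return True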

31 CMPT 401 Summer 2007 © A. Fedorova
Optimistic Concurrency Control
[Diagram: timeline of transactions moving through the Working, Validation, and Update phases. T1, T2 and T3 reach finishTN(T1), finishTN(T2), finishTN(T3) after StartTN(T5), while T5 is still active, so T5 must validate against their write sets; currently Tn = finishTN(T3).]

32 CMPT 401 Summer 2007 © A. Fedorova
Optimistic Concurrency Control: Discussion
Validation and update phases run in a critical section:
–Reduced concurrency
–Optimizations are possible
Space overhead:
–To validate Ti, we must have the WriteSets of all Tj where finishTn(Tj) > StartTn(Ti) and Tj was active when Ti began
–Each transaction must record its read/write activity
No deadlock, but potentially more aborts.

33 CMPT 401 Summer 2007 © A. Fedorova
Distributed Deadlock Resolution
Similar remedies as for single-site deadlock resolution:
–Prevention (lock ordering)
–Avoidance (abort a transaction that waits for too long)
–Detection (maintain a wait-for graph, abort transactions involved in a cycle)
Deadlock avoidance and detection require keeping dependency graphs, or wait-for graphs (WFGs).
WFGs are more difficult to construct in a distributed system (it takes more time, and one must use vector clocks or distributed snapshots).
Deadlock managers can fail independently.
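To illustrate the detection option, here is a plain depth-first cycle check over a wait-for graph. The representation (a dict mapping each transaction to the set of transactions it waits for) is an assumption for this sketch; building a consistent global WFG across sites is the hard part the slide points at.

def has_deadlock(wfg):
    # wfg: dict mapping transaction -> set of transactions it waits for.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wfg}

    def dfs(t):
        color[t] = GRAY
        for u in wfg.get(t, ()):
            if color.get(u, WHITE) == GRAY:
                return True                 # back edge: a cycle, i.e. deadlock
            if color.get(u, WHITE) == WHITE and dfs(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and dfs(t) for t in list(wfg))

# Example: T1 waits for T2 and T2 waits for T1 -> deadlock.
print(has_deadlock({"T1": {"T2"}, "T2": {"T1"}}))  # True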

34 CMPT 401 Summer 2007 © A. Fedorova
Summary
Atomic commitment:
–Two-phase commit
–A non-blocking implementation is possible in a synchronous system with reliable communication channels
–Possible in an asynchronous system, but not guaranteed to terminate (blocking)
Distributed recovery:
–Keep state on stable storage
–On reboot, ask around to recover the most current state
Distributed concurrency control:
–Centralized lock manager
–Distributed lock manager
–Optimistic concurrency control