1. Introduction CSEP 545 Transaction Processing Philip A. Bernstein

1. Introduction CSEP 545 Transaction Processing Philip A. Bernstein Copyright ©2003 Philip A. Bernstein

Outline 1. The Basics 2. ACID Properties 3. Atomicity and Two-Phase Commit 4. Performance 5. Styles of System

1.3 Atomicity and Two-Phase Commit
Distributed systems make atomicity harder. Suppose a transaction updates data managed by two DB systems. One DB system could commit the transaction, but a failure could prevent the other system from committing. The solution is the two-phase commit protocol. We abstract "DB system" as a resource manager (which could be a SQL DBMS, message manager, queue manager, OO DBMS, etc.).

Two-Phase Commit
Main idea: all resource managers (RMs) save a durable copy of the transaction's updates before any of them commit. If one RM fails after another commits, the failed RM can still commit after it recovers. The protocol to commit transaction T:
- Phase 1: T's coordinator asks all participant RMs to "prepare the transaction". Each participant RM replies "prepared" after T's updates are durable.
- Phase 2: After receiving "prepared" from all participant RMs, the coordinator tells all participant RMs to commit.
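The protocol can be sketched in Python. This is a minimal, single-process sketch under stated assumptions: the `ResourceManager` and `two_phase_commit` names are illustrative, and a real implementation must handle message loss, timeouts, and coordinator failure (via logging), all omitted here.

```python
class ResourceManager:
    """Illustrative participant in two-phase commit."""
    def __init__(self, name):
        self.name = name
        self.pending = []      # updates not yet durable
        self.durable = []      # updates forced to "disk" at prepare time
        self.state = "active"

    def write(self, update):
        self.pending.append(update)

    def prepare(self):
        # Phase 1: make the transaction's updates durable, then vote.
        self.durable.extend(self.pending)
        self.state = "prepared"
        return "prepared"

    def commit(self):
        self.state = "committed"

    def abort(self):
        self.pending.clear()
        self.state = "aborted"

def two_phase_commit(participants):
    # Phase 1: the coordinator asks every participant RM to prepare.
    votes = [rm.prepare() for rm in participants]
    if all(v == "prepared" for v in votes):
        # Phase 2: all updates are durable, so it is safe to commit.
        for rm in participants:
            rm.commit()
        return "committed"
    for rm in participants:
        rm.abort()
    return "aborted"

rm_a, rm_b = ResourceManager("A"), ResourceManager("B")
rm_a.write("debit 100")
rm_b.write("credit 100")
outcome = two_phase_commit([rm_a, rm_b])
```

Because every RM saved a durable copy before any of them committed, an RM that crashes between the two phases can still commit after it recovers.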

Two-Phase Commit System Architecture
(Diagram: the application program issues Start, Read, Write, Commit, and Abort; reads and writes go to a resource manager, while Start, Commit, and Abort go to the transaction manager (TM), which also communicates with other transaction managers.)
1. Start transaction returns a unique transaction identifier.
2. Resource accesses include the transaction identifier. For each transaction, the RM registers with the TM.
3. When the application asks the TM to commit, the TM runs two-phase commit.

1.4 Performance Requirements
Performance is measured in maximum transactions per second (tps) or per minute (tpm), and in dollars per tps or tpm. Dollars are measured by list purchase price plus 5-year vendor maintenance ("cost of ownership"). The workload typically has this profile:
- 10% application server plus application
- 30% communications system (not counting presentation)
- 50% DB system
The TP Performance Council (TPC) sets standards: http://www.tpc.org. TPC A & B ('89-'95), now TPC C & W.

TPC-A/B — Bank Tellers
Obsolete (a retired standard), but interesting. Input is a 100-byte message requesting a deposit/withdrawal. Database tables = {Accounts, Tellers, Branches, History}.
- Start
- Read message from terminal (100 bytes)
- Read+write account record (random access)
- Write history record (sequential access)
- Read+write teller record (random access)
- Read+write branch record (random access)
- Write message to terminal (200 bytes)
- Commit
The end of the History file and the Branch records are bottlenecks.
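The transaction body can be sketched over in-memory tables. This is only an illustration of the access pattern; the table layouts and function name are hypothetical, not from the TPC-A specification, and the terminal I/O and Start/Commit brackets are omitted.

```python
def tpca_transaction(db, account_id, teller_id, branch_id, amount):
    # Read+write the account record (random access).
    db["accounts"][account_id] += amount
    # Write a history record (sequential append).
    db["history"].append((account_id, teller_id, branch_id, amount))
    # Read+write the teller record (random access).
    db["tellers"][teller_id] += amount
    # Read+write the branch record (random access).
    db["branches"][branch_id] += amount
    # Reply to the terminal with the new balance.
    return db["accounts"][account_id]

db = {"accounts": {1: 100}, "tellers": {7: 0},
      "branches": {3: 0}, "history": []}
new_balance = tpca_transaction(db, account_id=1, teller_id=7,
                               branch_id=3, amount=50)
```

The sequential append to History and the small set of Branch rows are exactly why the slide calls them bottlenecks: every transaction touches them.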

The TPC-C Order-Entry Benchmark
TPC-C uses heavier-weight transactions.

TPC-C Transactions
New-Order:
- Get records describing a warehouse, customer, and district
- Update the district: increment the next available order number
- Insert a record into the Order and New-Order tables
- For 5-15 items, get the Item record and get/update the Stock record
- Insert an Order-Line record
Payment, Order-Status, Delivery, and Stock-Level have similar complexity, with different frequencies. tpmC = number of New-Order transactions per minute.

Comments on TPC-C Enables apples-to-apples comparison of TP systems Does not predict how your application will run, or how much hardware you will need, or which system will work best on your workload Not all vendors optimize for TPC-C. IBM has claimed DB2 is optimized for a different workload, so they only started publishing TPC numbers a few years ago.

Typical TPC-C Numbers
$3 - $50 / tpmC; most are under $20 / tpmC. The top 24 price/performance results are on MS SQL Server & Windows; one of the top 56 is Oracle, Linux, BEA Tuxedo. System cost ranges from $36K (Dell) to $12M (Fujitsu).
Examples of high throughput:
- HP ProLiant cluster, 709K tpmC, $10.6M, $15/tpmC (MS SQL, MS COM+)
- IBM, 428K tpmC, $7.6M, $18/tpmC (Oracle, WebSphere)
Examples of low cost (all use MS SQL Server, COM+):
- HP ProLiant cluster, 411K tpmC, $5.3M, $13/tpmC
- Dell, 16.7K tpmC, $47K, $3/tpmC
Results are very sensitive to date published.
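The price/performance figures are just total system cost divided by throughput, which can be checked directly against the numbers quoted above:

```python
def dollars_per_tpmc(system_cost_dollars, tpmc):
    # Price/performance: total system cost divided by throughput.
    return system_cost_dollars / tpmc

hp = dollars_per_tpmc(10_600_000, 709_000)   # HP ProLiant cluster
ibm = dollars_per_tpmc(7_600_000, 428_000)   # IBM
```

Both round to the quoted figures: about $15/tpmC for the HP cluster and $18/tpmC for the IBM system.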

TPC-W – Web Retailer
Introduced 12/99. Features: dynamic web page generation, multiple browser sessions, secure UI & payments (via secure socket layer). Profiles: shop (WIPS), browse (WIPSb), order (WIPSo). Tables: {Customer, Order, Order-Line, Item, Author, CreditCardTxns, Address, Country}. Transactions: HomeWeb, ShoppingCart, AdminRequest, AdminConfirm, CustomerRegister, BuyRequest, BuyConfirm, OrderInquiry, OrderDisplay, Search, SearchResult, NewProducts, BestSellers, ProductDetail, …

TPC-W (cont'd)
Scale factor: 1K - 10M items (in the catalog). Throughput is Web Interactions per second (WIPS) at a given scale factor:
- IBM: 21K WIPS @ 10K items; $33 / WIPS; $690K total
- Dell: 8K WIPS @ 10K items; $25 / WIPS; $190K total

1.5 Styles of Systems
TP is system engineering. Compare TP to other kinds of system engineering:
- Batch processing: submit a job and receive file output
- Time sharing: invoke programs in a process, which may interact with the process's display
- Real time: submit requests that have a deadline
- Client/server: a PC calls a server over a network to access files or run applications
- Decision support: submit queries to a shared database, and process the result with desktop tools
- TP: submit a request to run a transaction

TP vs. Batch Processing (BP)
- A BP application is usually uniprogrammed, so serializability is trivial. TP is multiprogrammed.
- BP performance is measured by throughput. TP is also measured by response time.
- BP can optimize by sorting transactions by the file key. TP must handle random transaction arrivals.
- BP produces a new output file; to recover, re-run the app.
- BP has a fixed and predictable load, unlike TP.
But where there is TP, there is almost always BP too: TP gathers the input, and BP post-processes work that has weak response time requirements. So TP systems must also do BP well.

TP vs. Timesharing (TS) TS is a utility with highly unpredictable load. Different programs run each day, exercising features in new combinations. By comparison, TP is highly regular. TS has less stringent availability and atomicity requirements. Downtime isn’t as expensive.

TP vs. Real Time (RT)
RT has more stringent response time requirements; it may control a physical process. RT deals with more specialized devices. RT doesn't need or use a transaction abstraction; it is usually loose about atomicity and serializability. In RT, response time goals are usually more important than completeness or correctness. In TP, correctness is paramount.

TP and Client/Server (C/S)
C/S is commonly used for TP, where the client prepares requests and the server runs transactions. In a sense, TP systems were the first C/S systems, where the client was a terminal.

TP and Decision Support Systems (DSSs) DSSs run long queries, usually with lower data integrity requirements than TP. A.k.a. data warehouse (DSS is the more generic term.) TP systems provide the raw data for DSSs.

Outline 1. The Basics 2. ACID Properties 3. Atomicity and Two-Phase Commit 4. Performance 5. Styles of System

What’s Next? This chapter covered TP system structure and properties of transactions and TP systems The rest of the course drills deeply into each of these areas, one by one.

2. Atomicity & Durability Using Shadow Paging CSEP 545 Transaction Processing for E-Commerce Philip A. Bernstein Copyright ©2003 Philip A. Bernstein

Introduction
To get started on the Java-C# project, you need to implement atomicity and durability in a centralized resource manager (i.e., a database). The recommended approach is shadowing. This section provides a quick introduction; a more thorough explanation of the overall topic of database recovery will be presented in a couple of weeks.

Review of Atomicity & Durability
- Atomicity: a transaction is all-or-nothing.
- Durability: the results of a committed transaction will survive failures.
The problem: the only hardware operation that is atomic with respect to failure, and whose result is durable, is "write one disk block". But the database doesn't fit on one disk block!

Shadowing in a Nutshell
The database is a tree whose root is a single disk block. There are two copies of the tree, the master and the shadow; the root points to the master copy. Updates are applied to a shadow copy. To install the updates, overwrite the root so it points to the shadow, thereby swapping the master and shadow. Before writing the root, none of the transaction's updates are part of the disk-resident database; after writing the root, all of them are. This means the transaction is atomic and durable.

More Specifically …
The database consists of a set of files. Each file consists of a page table P and a set of pages that P points to. A master record points to each file's master page table. Assume no concurrency, i.e., one transaction runs at any given time, and assume the transaction has a private shadow copy of each page table.

Initial State of Files a and b Pt1[a] 1 2 3 ... P1a Initial State PtT[a] 1 2 3 ... D I S K P2a Main Memory For T Master a b Pt1[b] 1 2 3 ... P1b PtT[b] 1 2 3 ... P2b

To Write a Page Pi
- The transaction writes a shadow copy of page Pi to disk.
- The transaction updates its page table to point to the shadow copy of Pi.
- The transaction marks Pi's entry in the page table (to remember which pages were updated).

After Writing Page P2b
(Diagram: same as the initial state, except a new copy of P2b is now on disk; T's shadow page table PtT[b] points to the new P2b, while the master page table Pt1[b] still points to the old P2b.)

After Writing Page P1a
(Diagram: new copies of both P1a and P2b are now on disk; T's shadow page tables point to the new copies, while the master page tables still point to the old ones.)

What if the System Fails?
Main memory is lost, so the current transaction is effectively aborted. But the database is still consistent.

To Commit
1. First copy PtT[a] and PtT[b] to disk.
(Diagram: the shadow page tables, which point to the new P1a and P2b, are now on disk, but the master record still points to the old page tables.)

To Commit (cont'd)
2. Then overwrite the master record to point to the new page tables.
(Diagram: the master record now points to the page tables that reference the new P1a and P2b; the old copies are no longer reachable.)
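The write and commit steps can be sketched for a single file, assuming one transaction at a time. The "disk" here is a dict of named blocks, and all names (`begin`, `write_page`, the block-naming scheme) are illustrative, not from the slides.

```python
disk = {
    "master": "pt1",                   # master record: names the live page table
    "pt1": {1: "p1_v0", 2: "p2_v0"},   # master page table
    "p1_v0": "old data 1",
    "p2_v0": "old data 2",
}

def begin(disk):
    # Private shadow copy of the last committed page table.
    return dict(disk[disk["master"]])

def write_page(disk, shadow_pt, page_no, data):
    # Write a shadow copy of the page to a fresh disk block,
    # then repoint the (in-memory) shadow page table at it.
    block = "p%d_new" % page_no
    disk[block] = data
    shadow_pt[page_no] = block

def commit(disk, shadow_pt):
    # 1. Force the shadow page table to disk.
    disk["pt_shadow"] = dict(shadow_pt)
    # 2. Overwrite the master record -- one atomic block write that
    #    installs all of the transaction's updates at once.
    disk["master"] = "pt_shadow"

shadow = begin(disk)
write_page(disk, shadow, 2, "new data 2")
commit(disk, shadow)
```

If the system fails before the last line, the master record still names `pt1`, so the disk-resident database is unchanged; after it, every update is visible.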

Shadow Paging with Shared Files
What if two transactions update different pages of a file? If they share their main-memory copy of the page table, then committing one will commit the other's updates too! One solution is file-granularity locking, but it gives poor concurrency. A better solution is to use a private copy of the page table per transaction. To commit T, within a critical section:
- get a private copy of the last committed value of the page table of each file modified by T
- update their entries for pages modified by T
- store the updated page tables on disk
- write a new master record, which installs just T's updates

Managing Available Disk Space
Treat the list of available pages like another file; the master record points to the master list. When a transaction allocates a page, it updates its shadow list. When a transaction commits, it writes a shadow copy of the list to disk. Committing the transaction swaps the master list and the shadow.

Final Remarks
You don't need to write shadow pages to disk until the transaction is ready to commit; this saves disk writes if a transaction writes a page multiple times. The main benefit is that shadowing doesn't require much code. It was used in the GemStone OO DBMS. It is not good for TPC benchmarks, though: count the disk updates per transaction, and consider how you would do record-level locking.

References
P. A. Bernstein, V. Hadzilacos, N. Goodman, Concurrency Control and Recovery in Database Systems, Chapter 6, Section 7 (pp. 201-204). The book is downloadable from http://research.microsoft.com/pubs/ccontrol/. Shadow paging was originally proposed by Raymond Lorie in "Physical Integrity in a Large Segmented Database", ACM Transactions on Database Systems, March 1977.

3. Concurrency Control for Transactions Part One CSEP 545 Transaction Processing Philip A. Bernstein Copyright ©2003 Philip A. Bernstein

Outline 1. A Simple System Model 2. Serializability Theory 3. Synchronization Requirements for Recoverability 4. Two-Phase Locking 5. Preserving Transaction Handshakes 6. Implementing Two-Phase Locking 7. Deadlocks

3.1 A Simple System Model
Goal: ensure serializable (SR) executions. Implementation technique: delay operations that would lead to non-SR results (e.g., set locks on shared data). For good performance, minimize the overhead and delay from synchronization operations. First we'll study how to get correct (SR) results; then we'll study performance implications (mostly in Part Two).

Assumption - Atomic Operations
We will synchronize Reads and Writes. We must therefore assume they're atomic; otherwise we'd have to synchronize the finer-grained operations that implement Read and Write.
- Read(x) returns the current value of x in the DB.
- Write(x, val) overwrites all of x (the whole page).
This assumption of atomic operations is what allows us to abstract executions as sequences of reads and writes (without loss of information); otherwise, what would wk[x] ri[x] mean? Also, commit (ci) and abort (ai) are atomic.

System Model
(Diagram: transactions 1 through N issue Start, Commit, Abort, Read(x), and Write(x) to a data manager, which manages the database.)

3.2 Serializability Theory
The theory is based on modeling executions as histories, such as H1 = r1[x] r2[x] w1[x] c1 w2[y] c2. First, characterize a concurrency control algorithm by the properties of the histories it allows; then prove that any history having these properties is SR. Why bother? It helps you understand why concurrency control algorithms work.

Equivalence of Histories
Two operations conflict if their execution order affects their return values or the DB state:
- a read and a write on the same data item conflict
- two writes on the same data item conflict
- two reads (on the same data item) do not conflict
Two histories are equivalent if they have the same operations and conflicting operations are in the same order in both histories, because only the relative order of conflicting operations can affect the result of the histories.

Examples of Equivalence
The following histories are equivalent:
H1 = r1[x] r2[x] w1[x] c1 w2[y] c2
H2 = r2[x] r1[x] w1[x] c1 w2[y] c2
H3 = r2[x] r1[x] w2[y] c2 w1[x] c1
H4 = r2[x] w2[y] c2 r1[x] w1[x] c1
But none of them are equivalent to H5 = r1[x] w1[x] r2[x] c1 w2[y] c2, because r2[x] and w1[x] conflict, and r2[x] precedes w1[x] in H1 - H4 but w1[x] precedes r2[x] in H5.
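The equivalence test is mechanical and can be sketched in Python. In this sketch, histories are lists of `(txn, action, item)` tuples with commits omitted for brevity, and, following the definition, we only track conflicts between operations of different transactions (same-transaction order is already covered by "same operations").

```python
def conflicts(op1, op2):
    # Same item, at least one write, different transactions.
    (t1, a1, x1), (t2, a2, x2) = op1, op2
    return t1 != t2 and x1 == x2 and "w" in (a1, a2)

def conflict_pairs(h):
    # Ordered pairs of conflicting operations, earlier operation first.
    return {(h[i], h[j])
            for i in range(len(h))
            for j in range(i + 1, len(h))
            if conflicts(h[i], h[j])}

def equivalent(h1, h2):
    # Same operations, and conflicting operations in the same order.
    return sorted(h1) == sorted(h2) and conflict_pairs(h1) == conflict_pairs(h2)

H1 = [(1, "r", "x"), (2, "r", "x"), (1, "w", "x"), (2, "w", "y")]
H4 = [(2, "r", "x"), (2, "w", "y"), (1, "r", "x"), (1, "w", "x")]
H5 = [(1, "r", "x"), (1, "w", "x"), (2, "r", "x"), (2, "w", "y")]
```

`equivalent(H1, H4)` holds because the only conflicting pair, r2[x] before w1[x], appears in the same order in both; H5 reverses that pair, so it is not equivalent to H1.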

Serializable Histories
A history is serializable if it is equivalent to a serial history. For example, H1 = r1[x] r2[x] w1[x] c1 w2[y] c2 is equivalent to H4 = r2[x] w2[y] c2 r1[x] w1[x] c1 (r2[x] and w1[x] are in the same order in H1 and H4). Therefore, H1 is serializable.

Another Example
H6 = r1[x] r2[x] w1[x] r3[x] w2[y] w3[x] c3 w1[y] c1 c2 is equivalent to a serial execution of T2 T1 T3:
H7 = r2[x] w2[y] c2 r1[x] w1[x] w1[y] c1 r3[x] w3[x] c3
Each conflict implies a constraint on any equivalent serial history. In H6, the conflicts imply T2→T3, T2→T1, T1→T3, and T2→T1.

Serialization Graphs
A serialization graph, SG(H), for history H tells the effective execution order of transactions in H. Given history H, SG(H) is a directed graph whose nodes are the committed transactions and whose edges are all Ti → Tk such that at least one of Ti's operations precedes and conflicts with at least one of Tk's operations.
H6 = r1[x] r2[x] w1[x] r3[x] w2[y] w3[x] c3 w1[y] c1 c2
SG(H6) = T2 → T1 → T3

The Serializability Theorem
A history H is SR if and only if SG(H) is acyclic.
Proof: (if) SG(H) is acyclic, so let Hs be a serial history consistent with SG(H). Each pair of conflicting ops in H induces an edge in SG(H). Since conflicting ops in Hs and H are in the same order, Hs ≡ H, so H is SR.
(only if) H is SR; let Hs be a serial history equivalent to H. Claim that if Ti → Tk in SG(H), then Ti precedes Tk in Hs (else Hs ≢ H). If SG(H) had a cycle T1 → T2 → … → Tn → T1, then T1 would precede T1 in Hs, a contradiction. So SG(H) is acyclic.
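SG(H) and the acyclicity test can be sketched directly from the definitions. Histories are again lists of `(txn, action, item)` tuples with commits omitted (assume every transaction shown has committed); the cycle check is a simple source-removal pass.

```python
def serialization_graph(h):
    # Edge Ti -> Tk whenever some op of Ti precedes and conflicts
    # with some op of Tk.
    edges = set()
    for i, (ti, ai, xi) in enumerate(h):
        for (tk, ak, xk) in h[i + 1:]:
            if ti != tk and xi == xk and "w" in (ai, ak):
                edges.add((ti, tk))
    return edges

def is_serializable(h):
    edges = serialization_graph(h)
    nodes = {t for (t, _, _) in h}
    # Repeatedly remove nodes with no incoming edge; if we get stuck
    # with nodes remaining, SG(H) has a cycle.
    while nodes:
        sources = {n for n in nodes if not any(b == n for (_, b) in edges)}
        if not sources:
            return False
        nodes -= sources
        edges = {(a, b) for (a, b) in edges if a in nodes and b in nodes}
    return True

# H6 from the slides (commits dropped): SG(H6) = T2 -> T1 -> T3.
H6 = [(1, "r", "x"), (2, "r", "x"), (1, "w", "x"), (3, "r", "x"),
      (2, "w", "y"), (3, "w", "x"), (1, "w", "y")]
# A non-SR history: T1 -> T2 and T2 -> T1 form a cycle.
H_bad = [(1, "r", "x"), (2, "w", "x"), (2, "r", "y"), (1, "w", "y")]
```

Running `serialization_graph(H6)` gives the edges {T2→T1, T2→T3, T1→T3}, which is acyclic, so H6 is SR; `H_bad` is rejected.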

How to Use the Serializability Theorem Characterize the set of histories that a concurrency control algorithm allows Prove that any such history must have an acyclic serialization graph. Therefore, the algorithm guarantees SR executions. We’ll use this soon to prove that locking produces serializable executions.

3.3 Synchronization Requirements for Recoverability
In addition to guaranteeing serializability, synchronization is needed to implement abort easily. When a transaction T aborts, the data manager wipes out all of T's effects, including undoing T's writes that were applied to the DB and aborting transactions that read values written by T (these are called cascading aborts). Example: given w1[x] r2[x] w2[y], to abort T1 we must undo w1[x] and abort T2 (a cascading abort).

Recoverability
If Tk reads from Ti and Ti aborts, then Tk must abort. Example: w1[x] r2[x] a1 implies T2 must abort. But what if Tk already committed? We'd be stuck. Example: in w1[x] r2[x] c2 a1, T2 can't abort after it commits. Executions must be recoverable: a transaction T's commit operation must follow the commit of every transaction from which T read.
- Recoverable: w1[x] r2[x] c1 c2
- Not recoverable: w1[x] r2[x] c2 a1
Recoverability requires synchronizing operations.

Avoiding Cascading Aborts
Cascading aborts are worth avoiding, to avoid complex bookkeeping and an uncontrolled number of forced aborts. To avoid cascading aborts, a data manager should ensure transactions read only committed data. Example: w1[x] c1 r2[x] avoids cascading aborts; w1[x] r2[x] a1 allows them. A system that avoids cascading aborts also guarantees recoverability.

Strictness
It's convenient to undo a write, w[x], by restoring its before image (= the value of x before w[x] executed). Example: w1[x,1] writes the value "1" into x. In w1[x,1] w1[y,3] c1 w2[y,1] r2[x] a2, we abort T2 by restoring the before image of w2[y,1], which is 3. But this isn't always possible. For example, consider w1[x,2] w2[x,3] a1 a2: a1 and a2 can't be implemented by restoring before images. (Notice that w1[x,2] w2[x,3] a2 a1 would be OK.) A system is strict if it only reads or overwrites committed data.

Strictness (cont'd)
More precisely, a system is strict if it only executes ri[x] or wi[x] if all previous transactions that wrote x committed or aborted. Examples ("…" marks a non-strict prefix):
- strict: w1[x] c1 w2[x] a2
- not strict: w1[x] w2[x] … a1 a2
- strict: w1[x] w1[y] c1 w2[y] r2[x] a2
- not strict: w1[x] w1[y] w2[y] … a1 r2[x] a2
"Strict" implies "avoids cascading aborts."
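Both properties can be checked mechanically. In this sketch a history is a list of `(txn, action, item)` entries where action is "r", "w", "c", or "a" (item is `None` for commit/abort); the function names are illustrative.

```python
def finished_before(h, txn, pos):
    # Did txn commit or abort before position pos?
    return any(t == txn and a in ("c", "a") for (t, a, _) in h[:pos])

def committed_before(h, txn, pos):
    return any(t == txn and a == "c" for (t, a, _) in h[:pos])

def is_strict(h):
    # Every read or write of x must follow the commit/abort of all
    # earlier writers of x.
    for i, (ti, ai, x) in enumerate(h):
        if ai in ("r", "w"):
            for (tj, aj, xj) in h[:i]:
                if tj != ti and aj == "w" and xj == x \
                        and not finished_before(h, tj, i):
                    return False
    return True

def avoids_cascading_aborts(h):
    # Each read must read from a committed transaction.
    for i, (ti, ai, x) in enumerate(h):
        if ai == "r":
            for j in range(i - 1, -1, -1):   # last earlier writer of x
                tj, aj, xj = h[j]
                if tj != ti and aj == "w" and xj == x:
                    if not committed_before(h, tj, i):
                        return False
                    break
    return True

# Examples from the slides:
strict_h   = [(1, "w", "x"), (1, "c", None), (2, "w", "x"), (2, "a", None)]
not_strict = [(1, "w", "x"), (2, "w", "x"), (1, "a", None), (2, "a", None)]
aca_ok     = [(1, "w", "x"), (1, "c", None), (2, "r", "x")]
aca_not    = [(1, "w", "x"), (2, "r", "x"), (1, "a", None)]
```

Note that `is_strict(aca_not)` is also False, illustrating that "strict" implies "avoids cascading aborts."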

3.4 Two-Phase Locking
Basic locking: each transaction sets a lock on each data item before accessing the data.
- The lock is a reservation.
- There are read locks and write locks.
- If one transaction has a write lock on x, then no other transaction can have any lock on x.
Example: rli[x], rui[x], wli[x], wui[x] denote lock/unlock operations.
- wl1[x] w1[x] rl2[x] r2[x] is impossible
- wl1[x] w1[x] wu1[x] rl2[x] r2[x] is OK

Basic Locking Isn't Enough
Basic locking doesn't guarantee serializability:
rl1[x] r1[x] ru1[x] wl1[y] w1[y] wu1[y] c1
rl2[y] r2[y] wl2[x] w2[x] ru2[y] wu2[x] c2
Eliminating the lock operations, we have r1[x] r2[y] w2[x] c2 w1[y] c1, which isn't SR. The problem is that locks aren't being released properly.

Two-Phase Locking (2PL) Protocol
A transaction is two-phase locked if:
- before reading x, it sets a read lock on x
- before writing x, it sets a write lock on x
- it holds each lock until after it executes the corresponding operation
- after its first unlock operation, it requests no new locks
Each transaction sets locks during a growing phase and releases them during a shrinking phase. Example: on the previous page, T2 is two-phase locked, but T1 is not, since ru1[x] < wl1[y] (we use "<" for "precedes").

2PL Theorem: If all transactions in an execution are two-phase locked, then the execution is SR.
Proof: Define Ti → Tk if either
- Ti read x and Tk later wrote x, or
- Ti wrote x and Tk later read or wrote x.
If Ti → Tk, then Ti released a lock before Tk obtained some lock. If Ti → Tk → Tm, then Ti released a lock before Tm obtained some lock (because Tk is two-phase). If Ti → … → Ti, then Ti released a lock before Ti obtained some lock, breaking the two-phase rule. So there cannot be a cycle. By the Serializability Theorem, the execution is SR.

2PL and Recoverability
2PL does not guarantee recoverability. This non-recoverable execution is two-phase locked: wl1[x] w1[x] wu1[x] rl2[x] r2[x] c2 … c1. Hence 2PL by itself is not strict and allows cascading aborts. However, holding write locks until after commit or abort guarantees strictness, and hence avoids cascading aborts and is recoverable. In the above example, T1 must commit before its first write unlock (wu1): wl1[x] w1[x] c1 wu1[x] rl2[x] r2[x] c2.

Automating Locking
2PL can be hidden from the application. When a data manager gets a Read or Write operation from a transaction, it sets a read or write lock. How does the data manager know it's safe to release locks (and be two-phase)? Ordinarily, the data manager holds a transaction's locks until it commits or aborts. More precisely, it can release read locks after it receives commit, but it releases write locks only after processing commit, to ensure strictness.
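A data manager's lock table can be sketched as below. The class is illustrative: instead of blocking, conflicting requests simply return `False` (a real lock manager would queue the requester), and releasing locks only at commit/abort makes every transaction trivially two-phase and strict.

```python
class LockManager:
    def __init__(self):
        self.read_locks = {}     # item -> set of transactions holding a read lock
        self.write_locks = {}    # item -> the transaction holding the write lock

    def read_lock(self, txn, x):
        owner = self.write_locks.get(x)
        if owner is not None and owner != txn:
            return False         # conflict: caller must wait
        self.read_locks.setdefault(x, set()).add(txn)
        return True

    def write_lock(self, txn, x):
        owner = self.write_locks.get(x)
        other_readers = self.read_locks.get(x, set()) - {txn}
        if (owner is not None and owner != txn) or other_readers:
            return False         # conflict: caller must wait
        self.write_locks[x] = txn
        return True

    def release_all(self, txn):
        # Called only at commit or abort, so locking is two-phase (and strict).
        for readers in self.read_locks.values():
            readers.discard(txn)
        self.write_locks = {x: t for x, t in self.write_locks.items()
                            if t != txn}

lm = LockManager()
lm.write_lock("T1", "x")             # wl1[x] granted
blocked = lm.read_lock("T2", "x")    # rl2[x] conflicts with wl1[x]: False
lm.release_all("T1")                 # T1 commits, releasing wl1[x]
granted = lm.read_lock("T2", "x")    # now granted
```

This mirrors the strict schedule wl1[x] w1[x] c1 wu1[x] rl2[x] r2[x]: T2's read is delayed until T1 has committed and released its write lock.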