Transaction Management, Concurrency Control and Recovery Chapter 22
Overview What are transactions? What is a schedule? What is concurrency control? Why we need concurrency control: three problems. Serializability and concurrency control. Theory: Conflict Serializability, View Serializability. Practice: Locking, Time-stamping, Optimistic techniques. Recovery facilities.
What is a Transaction? Transaction Action, or series of actions, carried out by user or application, which accesses or changes contents of database. Logical unit of work on the database. Transforms database from one consistent state to another, although consistency may be violated during transaction. Example: read(staffNo, salary); salary = salary * 1.1; write(staffNo, salary)
What is a Transaction? Can have one of two outcomes: Success - transaction commits and database reaches a new consistent state. Failure - transaction aborts, and database must be restored to consistent state before it started (rolled back or undone). Committed transaction cannot be aborted. Aborted transactions that are rolled back can be restarted later.
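A minimal sketch of the two outcomes in code, assuming a hypothetical in-memory database (the names Database and apply_raise are illustrative, not part of the chapter):

```python
# Hypothetical in-memory "database": a dict mapping staffNo -> salary.
class Database:
    def __init__(self, data):
        self.committed = dict(data)      # last consistent state

    def run(self, transaction):
        working = dict(self.committed)   # the transaction works on a private copy
        try:
            transaction(working)
            self.committed = working     # commit: database reaches a new consistent state
            return "commit"
        except Exception:
            return "rollback"            # abort: committed state is left untouched

def apply_raise(state):
    # read(staffNo, salary); salary = salary * 1.1; write(staffNo, salary)
    state["SG37"] = state["SG37"] * 1.1

db = Database({"SG37": 10_000})
print(db.run(apply_raise), db.committed)  # prints "commit" and the new salary (~11000)
```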
Properties of Transactions Four basic (ACID) properties of a transaction are: Atomicity ‘All or nothing’ property. Consistency Must transform database from one consistent state to another. Isolation Partial effects of incomplete transactions should not be visible to other transactions. Durability Effects of a committed transaction are permanent and must not be lost because of later failure. We deal with transactions in a schedule.
Schedule
An annotated example: the columns show the running transactions (T1, T2, T3) and, optionally, the data items affected by the transactions; the Time column (starting at t0 or t1) gives the order of execution.

Time | T1 | T2 | T3 | Balance1 | Balance2
t0 | Begin Transaction | | | 100 | 200
t1 | Read(Balance1) | | | |
t2 | | | | |
t3 | Balance1 += 500 | Read(Balance2) | | |
t4 | Write(Balance1) | | | 600 |
t5 | Commit | | | |
:  | | | | |
Schedule Rules Never start two transactions at the same time. Never perform Reads and Writes of different transactions at the same time. Each transaction should end with a commit or abort (rollback).
Schedule Definitions Schedule Sequence of reads/writes by set of concurrent transactions. Serial Schedule Schedule where operations of each transaction are executed consecutively without any interleaved operations from other transactions. No guarantee that results of all serial executions of a given set of transactions will be identical. (Think of an example) Non-Serial Schedule Schedule where operations from set of concurrent transactions are interleaved
Example of a Serial Schedule
T1 runs to completion at t0 to t3; T2's operations then occupy t4 to t6.

Time | T1 | T2
t0 | Begin Transaction |
t1 | Read(Balance1) |
t2 | Balance1 += 500 |
t3 | Commit |
t4 | |
t5 | |
t6 | |
Example of a non-Serial Schedule
The operations of T1 and T2 are interleaved over t0 to t6 (Begin Transaction at t0, Read(Balance1) at t2, a Commit at t4, Balance1 += 500 at t5) rather than run consecutively.
What is Concurrency Control? Concurrency: transactions running simultaneously. Concurrency Control: Process of managing simultaneous operations (transactions) on the database without having them interfere with one another. Prevents interference when two or more users are accessing database simultaneously and at least one is updating data. Although two transactions may be correct in themselves, interleaving of operations may produce an incorrect result.
Why we Need Concurrency Control? Three examples of potential problems caused by concurrency: Lost update problem. Uncommitted dependency problem. Inconsistent analysis problem.
Lost Update Problem
A successfully completed update is overridden by another user. T1 is withdrawing £10 from an account with balance balx, initially £100; T2 is depositing £100 into the same account. Serially, the final balance would be £190.

Time | T1 | T2 | balx
t1 | | begin_transaction | 100
t2 | begin_transaction | read(balx) | 100
t3 | read(balx) | balx = balx + 100 | 100
t4 | balx = balx - 10 | write(balx) | 200
t5 | write(balx) | commit | 90
t6 | commit | | 90

Loss of T2's update is avoided by preventing T1 from reading balx until after T2's update.
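The table can be replayed directly; a small illustrative sketch that interleaves T1 and T2 exactly as above, using local variables for each transaction's private copy of balx:

```python
balx = 100                    # shared database value

t2_local = balx               # t2: T2 read(balx)          -> 100
t1_local = balx               # t3: T1 read(balx)          -> 100
t2_local = t2_local + 100     # t3: T2 balx = balx + 100
t1_local = t1_local - 10      # t4: T1 balx = balx - 10
balx = t2_local               # t4: T2 write(balx)         -> 200
balx = t1_local               # t5: T1 write(balx)         -> 90, T2's update is lost

print(balx)                   # 90 rather than the 190 a serial execution gives
```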
Uncommitted Dependency Problem
Occurs when one transaction sees the intermediate results of another transaction before it has committed. T4 updates balx to £200 but then aborts, so balx should be restored to its original value of £100. T3 has read the new value of balx (£200) and uses it as the basis of a £10 reduction, giving a new (incorrect) balance of £190 instead of £90.

Time | T3 | T4 | balx
t1 | | begin_transaction | 100
t2 | | read(balx) | 100
t3 | | balx = balx + 100 | 100
t4 | begin_transaction | write(balx) | 200
t5 | read(balx) | : | 200
t6 | balx = balx - 10 | rollback | 100
t7 | write(balx) | | 190
t8 | commit | | 190

Problem avoided by preventing T3 from reading balx until after T4 commits or aborts.
Inconsistent Analysis Problem Occurs when transaction reads several values but second transaction updates some of them during execution of first. Sometimes referred to as dirty read or unrepeatable read. T6 is totaling balances of account x (£100), account y (£50), and account z (£25). Meantime, T5 has transferred £10 from balx to balz, so T6 now has wrong result (£10 too high).
Inconsistent Analysis Problem Problem avoided by preventing T6 from reading balx and balz until after T5 completed updates.
Serializability Serializability is a property of a schedule: We say serializable schedule and non-serializable schedule. But what makes a schedule serializable? A serializable schedule is a non-serial schedule that allows transactions to execute concurrently without interfering with one another. In other words, a non-serial schedule that is equivalent to some serial schedule. Main goal is to prevent transactions interfering with each other (3 problems discussed earlier).
Serializability
Two types of serializability: Conflict and View.
Theory: Conflict Serializability Test; View Serializability Test.
Practice: Locking; Time-stamping; Optimistic.
Conflict Serializability In serializability, ordering of read/writes is important: (a) If two transactions only read a data item, they do not conflict and order is not important. (b) If two transactions either read or write completely separate data items, they do not conflict and order is not important. (c) If one transaction writes a data item and another reads or writes same data item, order of execution is important. They conflict.
Conflict Serializability Schedule S1 is conflict serializable if it is conflict equivalent to a serial schedule. We test a schedule for conflict serializability using a Precedence Graph.
Testing for Conflict Serializability – Precedence Graph
Create:
a node for each transaction;
a directed edge Ti → Tj, if Tj reads the value of an item written by Ti;
a directed edge Ti → Tj, if Tj writes a value into an item after it has been read by Ti;
a directed edge Ti → Tj, if Tj writes a value into an item after it has been written by Ti.
If the precedence graph contains a cycle, the schedule is not conflict serializable.
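The three edge rules reduce to one check: add an edge Ti → Tj whenever an operation of Ti conflicts with a later operation of Tj (same item, different transactions, at least one write). A sketch of the test, assuming a schedule is given as (transaction, 'r'/'w', item) triples; all function names are illustrative:

```python
def precedence_graph(schedule):
    """schedule: list of (txn, op, item) in execution order, op in {'r', 'w'}."""
    edges = set()
    for i, (ti, op_i, x) in enumerate(schedule):
        for tj, op_j, y in schedule[i + 1:]:
            # Edge Ti -> Tj for every conflicting pair: same item,
            # different transactions, at least one of the two operations a write.
            if ti != tj and x == y and "w" in (op_i, op_j):
                edges.add((ti, tj))
    return edges

def has_cycle(edges):
    """Depth-first search for a cycle in a directed graph given as a set of edges."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visiting, done = set(), set()

    def dfs(node):
        visiting.add(node)
        for nxt in graph.get(node, ()):
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in list(graph) if n not in done)

def conflict_serializable(schedule):
    return not has_cycle(precedence_graph(schedule))

# The lost-update interleaving: T1 and T2 both read balx, then both write it.
s = [("T2", "r", "balx"), ("T1", "r", "balx"),
     ("T2", "w", "balx"), ("T1", "w", "balx")]
print(conflict_serializable(s))   # False: the graph has the cycle T1 -> T2 -> T1
```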
Test Schedule: Is it conflict serializable?

Time | T7 | T8
t1 | begin_transaction |
t2 | read(balx) |
t3 | balx = balx + 100 |
t4 | write(balx) |
t5 | | begin_transaction
t6 | | read(balx)
t7 | | balx = balx * 1.1
t8 | | write(balx)
t9 | | read(baly)
t10 | | baly = baly * 1.1
t11 | | write(baly)
t12 | | commit
t13 | read(baly) |
t14 | write(baly) |
t15 | commit |
View Serializability
Offers a less stringent definition of schedule equivalence than conflict serializability. Two schedules S1 and S2 are view equivalent if:
For each data item x, if Ti reads the initial value of x in S1, Ti must also read the initial value of x in S2.
For each read on x by Ti in S1, if the value read by Ti was written by Tj, then Ti must also read the value of x produced by Tj in S2.
For each data item x, if the last write on x is performed by Ti in S1, the same transaction must perform the final write on x in S2.
View Serializability
Schedule is view serializable if it is view equivalent to a serial schedule. Every conflict serializable schedule is view serializable, although the converse is not true. It can be shown that any view serializable schedule that is not conflict serializable contains one or more blind writes.
(Diagram: Conflict Serializable Schedules ⊂ View Serializable Schedules ⊂ All Schedules.)
View Serializable Schedule

Time | T7 | T8
t1 | begin_transaction |
t2 | read(balx) |
t3 | write(balx) |
t4 | read(baly) |
t5 | write(baly) |
t6 | commit |
t7 | | begin_transaction
t8 | | read(balx)
t9 | | write(balx)
t10 | | read(baly)
t11 | | write(baly)
t12 | | commit

(A second table on the slide shows an interleaving of T7 and T8 over the same data items, for comparison with this serial order.)
View Serializable Schedule

Time | T11 | T12 | T13
t1 | begin_transaction | |
t2 | read(balx) | |
t3 | | begin_transaction |
t4 | | write(balx) |
t5 | | commit |
t6 | write(balx) | |
t7 | commit | |
t8 | | | begin_transaction
t9 | | | write(balx)
t10 | | | commit

Is this schedule conflict serializable?
Concurrency Control Techniques
Serializability. Theory: Conflict Serializability Test; View Serializability Test. Practice: Locking; Time-stamping; Optimistic.
Concurrency Control Techniques Two basic concurrency control techniques: Locking, Timestamping. Both are conservative approaches: delay transactions in case they conflict with other transactions. Optimistic methods assume conflict is rare and only check for conflicts at commit.
Concurrency Control Techniques Overview
Locking: Basic Rules; 2PL (Regular, Rigorous, Strict); Deadlock Prevention (Timeouts, Wait-Die, Wound-Wait); Deadlock Detection (Wait-for Graph).
Time-stamping: Basic Timestamp Ordering; Thomas’s Write Rule; Multi-version Timestamp Ordering.
Optimistic.
Locking
Main idea: a transaction uses locks to deny access to other transactions and so prevent incorrect updates. Most widely used approach to ensure serializability. A transaction must claim a shared (read) lock on x before it can read it, or an exclusive (write) lock on x before it can write it. A lock prevents other transactions from performing conflicting operations on the locked data item (an exclusive lock blocks both reads and writes by others; a shared lock blocks writes).
Locking – Basic Rules
Shared lock: if a transaction has a shared lock on an item, it can read but not update the item. More than one transaction can hold a shared lock on an item.
Exclusive lock: if a transaction has an exclusive lock on an item, it can both read and update the item. Only one transaction can hold an exclusive lock on an item.
Some systems allow a transaction to: upgrade a read lock to an exclusive lock; downgrade an exclusive lock to a shared lock.
Locking – Commands
To acquire a shared (read) lock on X: Read_Lock(X), RLock(X), Shared_Lock(X), SLock(X).
To acquire an exclusive (write) lock on X: Write_Lock(X), WLock(X), Exclusive_Lock(X), XLock(X).
To release a lock on X: Unlock(X).
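A sketch of a lock table that enforces the basic rules (shared locks are mutually compatible, an exclusive lock is compatible with nothing); the class and method names are illustrative, and a False return stands for "the caller must wait":

```python
class LockManager:
    def __init__(self):
        self.table = {}   # item -> (mode, set of holders), mode "S" or "X"

    def slock(self, txn, item):
        mode, holders = self.table.get(item, ("S", set()))
        if mode == "X" and holders and txn not in holders:
            return False                        # conflicts with an exclusive lock: wait
        self.table[item] = (mode if holders else "S", holders | {txn})
        return True

    def xlock(self, txn, item):
        mode, holders = self.table.get(item, ("S", set()))
        if holders - {txn}:                     # someone else holds any lock: wait
            return False
        self.table[item] = ("X", {txn})         # grant (or upgrade to) an exclusive lock
        return True

    def unlock(self, txn, item):
        mode, holders = self.table.get(item, ("S", set()))
        holders.discard(txn)
        if not holders:
            self.table.pop(item, None)

lm = LockManager()
print(lm.slock("T1", "balx"), lm.slock("T2", "balx"))   # True True: shared locks coexist
print(lm.xlock("T3", "balx"))                           # False: T3 must wait
```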
Correct use of locks. But is the execution correct?

Time | T9 | T10
t1 | begin_transaction |
t2 | write_lock(balx) |
t3 | read(balx) |
t4 | balx = balx + 100 |
t5 | write(balx) |
t6 | unlock(balx) |
t7 | | begin_transaction
t8 | | write_lock(balx)
t9 | | read(balx)
t10 | | balx = balx * 1.1
t11 | | write(balx)
t12 | | unlock(balx)
t13 | | write_lock(baly)
t14 | | read(baly)
t15 | | baly = baly * 1.1
t16 | | write(baly)
t17 | | commit/unlock(baly)
t18 | write_lock(baly) |
t19 | read(baly) |
t20 | baly = baly - 100 |
t21 | write(baly) |
t22 | commit/unlock(baly) |
Two-Phase Locking (2PL) We just saw that locking alone doesn’t always work. Solution: 2PL. Transaction follows 2PL protocol if all locking operations precede first unlock operation in the transaction. Two phases for transaction: Growing phase - acquires all locks but cannot release any locks. Shrinking phase - releases locks but cannot acquire any new locks. With 2PL, we can prevent the three problems.
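A sketch of how the two phases can be enforced on top of the illustrative LockManager above: once a transaction has released any lock it may not acquire another.

```python
class TwoPhaseTxn:
    """Growing phase: may acquire locks. Shrinking phase: may only release them."""
    def __init__(self, name, lock_manager):
        self.name, self.lm = name, lock_manager
        self.held = set()
        self.shrinking = False

    def lock(self, item, exclusive=False):
        if self.shrinking:
            raise RuntimeError("2PL violation: lock requested after an unlock")
        ok = (self.lm.xlock if exclusive else self.lm.slock)(self.name, item)
        if ok:
            self.held.add(item)
        return ok                       # False means the transaction has to wait

    def unlock(self, item):
        self.shrinking = True           # the first unlock ends the growing phase
        self.lm.unlock(self.name, item)
        self.held.discard(item)

    def commit(self):
        for item in list(self.held):    # rigorous flavour: release everything at the end
            self.unlock(item)

lm = LockManager()                      # the illustrative lock table from above
t9 = TwoPhaseTxn("T9", lm)
t9.lock("balx", exclusive=True)
t9.lock("baly", exclusive=True)
t9.commit()                             # all locks released only at commit
```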
Original Lost Update Problem

Time | T1 | T2 | balx
t1 | | begin_transaction | 100
t2 | begin_transaction | read(balx) | 100
t3 | read(balx) | balx = balx + 100 | 100
t4 | balx = balx - 10 | write(balx) | 200
t5 | write(balx) | commit | 90
t6 | commit | | 90
Preventing Lost Update Problem

Time | T1 | T2 | balx
t1 | | begin_transaction | 100
t2 | begin_transaction | write_lock(balx) | 100
t3 | write_lock(balx) | read(balx) | 100
t4 | WAIT | balx = balx + 100 | 100
t5 | WAIT | write(balx) | 200
t6 | WAIT | commit/unlock(balx) | 200
t7 | read(balx) | | 200
t8 | balx = balx - 10 | | 200
t9 | write(balx) | | 190
t10 | commit/unlock(balx) | | 190
Original Uncommitted Dependency Problem

Time | T3 | T4 | balx
t1 | | begin_transaction | 100
t2 | | read(balx) | 100
t3 | | balx = balx + 100 | 100
t4 | begin_transaction | write(balx) | 200
t5 | read(balx) | : | 200
t6 | balx = balx - 10 | rollback | 100
t7 | write(balx) | | 190
t8 | commit | | 190
Preventing Uncommitted Dependency Problem

Time | T3 | T4 | balx
t1 | | begin_transaction | 100
t2 | | write_lock(balx) | 100
t3 | | read(balx) | 100
t4 | begin_transaction | balx = balx + 100 | 100
t5 | write_lock(balx) | write(balx) | 200
t6 | WAIT | commit/unlock(balx) | 200
t7 | read(balx) | | 200
t8 | balx = balx - 10 | | 200
t9 | write(balx) | | 190
t10 | commit/unlock(balx) | | 190
Original Inconsistent Analysis Problem
Preventing Inconsistent Analysis Problem
A Potential Problem with 2PL
Cascading Rollbacks
If every transaction in a schedule follows 2PL, the schedule is serializable. However, problems can occur with the interpretation of when locks can be released: if a transaction releases its locks before it commits and then aborts, other transactions that have read its uncommitted data must also be rolled back, and this can cascade. Cascading rollback is undesirable since it can lead to the undoing of a significant amount of work.
To prevent this with 2PL, two solutions:
Rigorous 2PL: leave the release of all locks until the end of the transaction.
Strict 2PL: hold only exclusive locks until the end of the transaction.
Both are still 2PL, so both still have growing and shrinking phases. 2PL may still cause deadlock.
Problems with 2PL
Cascading rollbacks: solved with strict or rigorous 2PL.
Deadlocks: happen in regular 2PL, and also in strict and rigorous 2PL. Handled using deadlock prevention and detection techniques.
Deadlocks
Deadlock: an impasse that may result when two (or more) transactions are each waiting for locks held by the other to be released. Once a deadlock happens, there is only one way to break it: abort one or more of the transactions. Deadlock should be transparent to the user, so the DBMS should restart the aborted transaction(s).
Example Deadlock

Time | T9 | T10
t1 | begin_transaction |
t2 | write_lock(balx) | begin_transaction
t3 | read(balx) | write_lock(baly)
t4 | balx = balx - 10 | read(baly)
t5 | write(balx) | baly = baly + 100
t6 | write_lock(baly) | write(baly)
t7 | WAIT | write_lock(balx)
t8 | WAIT | WAIT
t9 | WAIT | :
t10 | WAIT |
t11 | : |
Deadlock Handling
Two general techniques for handling deadlock:
Deadlock prevention: the DBMS does not allow deadlock to happen. Timeouts; Wait-Die; Wound-Wait.
Deadlock detection and recovery: the DBMS allows deadlocks to happen but detects and recovers from them. Wait-for Graphs (WFG).
Timeouts
A transaction that requests a lock will wait only for a system-defined period of time. If the lock has not been granted within this period, the lock request times out. The DBMS assumes the transaction is deadlocked, even though it may not be, and it aborts and automatically restarts the transaction.
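A sketch of the timeout policy using Python's threading locks in place of the DBMS lock manager; the retry loop and names are illustrative:

```python
import threading

LOCK_TIMEOUT = 2.0   # hypothetical system-defined wait limit, in seconds

def run_with_timeout(lock, work, retries=3):
    """Run `work` under `lock`; presume deadlock and restart if the lock times out."""
    for attempt in range(retries):
        if lock.acquire(timeout=LOCK_TIMEOUT):
            try:
                return work()
            finally:
                lock.release()
        # Lock not granted within the period: assume deadlock, abort, restart.
        print(f"attempt {attempt}: lock request timed out, restarting transaction")
    raise RuntimeError("transaction gave up after repeated timeouts")

balx_lock = threading.Lock()
print(run_with_timeout(balx_lock, lambda: "balx updated"))
```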
Timestamps
Before we discuss the Wait-Die and Wound-Wait techniques, we introduce timestamps. A timestamp is a unique number given to each transaction; traditionally, it is the time the transaction started. The smaller the timestamp, the older the transaction.
Timestamps
TS(T11) = 1, TS(T12) = 3, TS(T13) = 8

Time | T11 | T12 | T13
t1 | begin_transaction | |
t2 | read(balx) | |
t3 | | begin_transaction |
t4 | | write(balx) |
t5 | | commit |
t6 | write(balx) | |
t7 | commit | |
t8 | | | begin_transaction
t9 | | | write(balx)
t10 | | | commit
Wait-Die Technique
Only an older transaction can wait for a younger one; otherwise the requesting transaction is aborted (dies) and restarted with the same timestamp. (Why the same?)
If a transaction Ti requests a lock on an item held by Tj:
If Ti is older than Tj (TS(Ti) < TS(Tj)), Ti waits for Tj to release the lock.
If Ti is younger than Tj (TS(Ti) > TS(Tj)), Ti is aborted and restarted with the same TS.
Wound-Wait Technique
Only a younger transaction can wait for an older one. If an older transaction requests a lock held by a younger one, the younger one is aborted (wounded) and restarted with the same timestamp. (Why the same?)
If a transaction Ti requests a lock on an item held by Tj:
If Ti is older than Tj (TS(Ti) < TS(Tj)), Tj is aborted and Ti gets the lock.
If Ti is younger than Tj (TS(Ti) > TS(Tj)), Ti waits for Tj to release the lock.
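Both prevention schemes compare just the two timestamps; a small illustrative sketch of the Wait-Die and Wound-Wait decisions:

```python
def wait_die(ts_requester, ts_holder):
    """Older requesters may wait; younger requesters die (abort, keep their TS)."""
    if ts_requester < ts_holder:
        return "requester waits"
    return "requester aborted, restarted with the same TS"

def wound_wait(ts_requester, ts_holder):
    """Older requesters wound (abort) the holder; younger requesters wait."""
    if ts_requester < ts_holder:
        return "holder aborted, requester gets the lock"
    return "requester waits"

# A young transaction (TS 8) requests a lock held by an old one (TS 1):
print(wait_die(8, 1))     # requester aborted, restarted with the same TS
print(wound_wait(8, 1))   # requester waits
```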
Deadlock Detection and Recovery
Usually handled by construction of a wait-for graph (WFG) showing transaction dependencies:
Create a node for each transaction.
Create an edge Ti → Tj, if Ti is waiting to lock an item locked by Tj.
Deadlock exists if and only if the WFG contains a cycle.
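A sketch of the detection step, reusing the has_cycle helper from the precedence-graph sketch above; the waits structure (who waits for whom) is illustrative:

```python
def wait_for_graph(waits):
    """waits: dict mapping Ti -> set of transactions Ti is currently waiting for."""
    return {(ti, tj) for ti, holders in waits.items() for tj in holders}

def deadlocked(waits):
    return has_cycle(wait_for_graph(waits))   # has_cycle() from the earlier sketch

# T9 waits for T10 (baly) while T10 waits for T9 (balx):
print(deadlocked({"T9": {"T10"}, "T10": {"T9"}}))   # True: the WFG has a cycle
```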
Example Schedule with WFG

Time | T9 | T10
t1 | begin_transaction |
t2 | write_lock(balx) | begin_transaction
t3 | read(balx) | write_lock(baly)
t4 | balx = balx - 10 | read(baly)
t5 | write(balx) | baly = baly + 100
t6 | write_lock(baly) | write(baly)
t7 | WAIT | write_lock(balx)
t8 | WAIT | WAIT
t9 | WAIT | :
t10 | WAIT |
t11 | : |

WFG: T9 → T10 (T9 waits for baly, locked by T10) and T10 → T9 (T10 waits for balx, locked by T9). The cycle means deadlock.
Deadlock Detection and Recovery WFG is created at regular intervals. Several issues when recovering from a deadlock: choice of deadlock victim; avoiding starvation. Self-read: pages 593-594
Concurrency Control Techniques Overview
Locking: Basic Rules; 2PL (Regular, Rigorous, Strict); Deadlock Prevention (Timeouts, Wait-Die, Wound-Wait); Deadlock Detection (Wait-for Graph).
Time-stamping: Basic Timestamp Ordering; Thomas’s Write Rule; Multi-version Timestamp Ordering.
Optimistic.
Timestamping
Main idea: transactions are ordered globally so that older transactions (smaller timestamps) get priority in the event of conflict. Conflict is resolved by rolling back (aborting) and restarting the transaction. No locks, so no deadlock.
Timestamp: a unique identifier created by the DBMS that indicates the relative starting time of a transaction.
Timestamping: a concurrency control protocol that orders transactions by their timestamps; transactions with smaller timestamps get priority in the event of conflict.
Timestamping
Three techniques: Basic Timestamp Ordering. Thomas’s Write Rule. Multiversion Timestamp Ordering.
Basic Timestamp Ordering
A read/write proceeds only if the last update on that data item was carried out by an older transaction. Otherwise, the transaction requesting the read/write is restarted and given a new timestamp.
Main goal: order reads and writes as they would have been ordered in a serial schedule (in timestamp order).
Timestamps are also kept for data items:
read-timestamp: timestamp of the last transaction to read the item;
write-timestamp: timestamp of the last transaction to write the item.
Basic Timestamping – Read(x)
Consider a read(x) by transaction T with timestamp TS(T):
If TS(T) < write_timestamp(x): x was already updated by a younger (later) transaction. Transaction T must be aborted and restarted with a new timestamp.
If TS(T) ≥ write_timestamp(x): execute the read(x) operation of T and set read_timestamp(x) = TS(T).
Basic Timestamping – Write(x)
Consider a write(x) by transaction T with timestamp TS(T):
If TS(T) < read_timestamp(x): x has already been read by a younger transaction. Transaction T must be aborted and restarted with a new timestamp.
If TS(T) < write_timestamp(x): x has already been written by a younger transaction. Transaction T must be aborted and restarted with a new timestamp.
Otherwise, the operation is accepted and executed, and write_timestamp(x) = TS(T).
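A sketch of both rules together, keeping a read and a write timestamp per item; Restart signals "abort and restart with a new timestamp". (The read rule below keeps the largest reader timestamp seen, a common refinement of read_timestamp(x) = TS(T).) Names are illustrative:

```python
class Restart(Exception):
    """The transaction must be aborted and restarted with a new timestamp."""

class BasicTO:
    def __init__(self):
        self.read_ts = {}     # item -> timestamp of the last transaction to read it
        self.write_ts = {}    # item -> timestamp of the last transaction to write it

    def read(self, ts, item):
        if ts < self.write_ts.get(item, 0):
            raise Restart()   # x already updated by a younger transaction
        self.read_ts[item] = max(self.read_ts.get(item, 0), ts)

    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0):
            raise Restart()   # x already read by a younger transaction
        if ts < self.write_ts.get(item, 0):
            raise Restart()   # x already written by a younger transaction
        self.write_ts[item] = ts

sched = BasicTO()
sched.read(ts=5, item="balx")
try:
    sched.write(ts=3, item="balx")   # an older transaction writes after a younger read
except Restart:
    print("T with TS 3 is aborted and restarted with a new timestamp")
```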
Basic Timestamp Ordering
Thomas’s Write Rule
Provides greater concurrency by rejecting obsolete write operations.
When a read(x) is encountered, behave exactly as in basic timestamp ordering (the Read(x) rule above).
When a write(x) is encountered, perform the following checks:
If TS(T) < read_timestamp(x): x has already been read by a younger transaction. Transaction T must be aborted and restarted with a new timestamp.
If TS(T) < write_timestamp(x): x has already been written by a younger transaction. Ignore the write operation (the ignore obsolete write rule).
Otherwise, the operation is accepted and executed, and write_timestamp(x) = TS(T).
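The only change from the basic write rule is the second case, which now skips the obsolete write instead of restarting. A sketch extending the illustrative BasicTO class above:

```python
class ThomasTO(BasicTO):
    def write(self, ts, item):
        if ts < self.read_ts.get(item, 0):
            raise Restart()            # x already read by a younger transaction
        if ts < self.write_ts.get(item, 0):
            return                     # obsolete write: ignore it, no restart
        self.write_ts[item] = ts

sched = ThomasTO()
sched.write(ts=7, item="balx")
sched.write(ts=3, item="balx")         # ignored: a younger transaction already wrote balx
print(sched.write_ts["balx"])          # 7
```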
Comparison of Methods
(Diagram: the schedules produced under 2PL and under timestamping (TS) lie within the Conflict Serializable Schedules, which lie within the View Serializable Schedules, within All Schedules.)
Multiversion Timestamp Ordering
Main idea: versioning of data can be used to increase concurrency, so keep multiple versions of each data item.
Basic timestamp ordering assumes only one version of a data item exists, so only one transaction can access a data item at a time. Multiversion allows multiple transactions to read and write different versions of the same data item, and ensures each transaction sees a consistent set of versions for all the data items it accesses.
In multiversion:
Each write operation creates a new version of the data item while retaining the old version.
When a transaction attempts to read a data item, the system selects one version that ensures serializability: no aborts on reads.
Each version has a read and a write timestamp.
Versions can be deleted once they are no longer required.
Multiversion Timestamping – Read(x)
When a transaction T wishes to read x, we find the correct version and let T read it. The correct version, xi, is the latest version written by an older transaction: the version with the largest write_timestamp(xi) such that TS(T) ≥ write_timestamp(xi).
After xi is found and read by T, we record that xi was read by T: read_timestamp(xi) = max(read_timestamp(xi), TS(T)).
Absolutely no aborts on reads.
Multiversion Timestamping – Write(x)
When a transaction T wishes to write x, we need to perform a test first and then write a new version.
Test: we need to make sure that there is no older version of x that has been read by a transaction younger than T; a transaction younger than T should read T's version, not that older version.
Multiversion Timestamping – Write(x)
Find the correct version xi: the latest version written by an older transaction, i.e. TS(T) ≥ write_timestamp(xi).
Test it: make sure that no younger transaction has already read xi, i.e. is TS(T) < read_timestamp(xi)?
If yes: abort T.
If no: create a new version xj of x with read_timestamp(xj) = write_timestamp(xj) = TS(T).
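A sketch of the multiversion rules, keeping a list of versions per item (each version stores a value, its write timestamp and its read timestamp); it reuses the Restart signal from the earlier sketch and assumes an initial version of each item exists:

```python
class MVTO:
    """Multiversion timestamp ordering over versions [value, write_ts, read_ts]."""
    def __init__(self):
        self.versions = {}        # item -> list of versions, in creation order

    def _latest_older(self, ts, item):
        # Latest version whose writer is not younger than T: write_ts <= TS(T).
        candidates = [v for v in self.versions.get(item, []) if v[1] <= ts]
        return max(candidates, key=lambda v: v[1]) if candidates else None

    def read(self, ts, item):
        v = self._latest_older(ts, item)      # assumes an initial version exists
        v[2] = max(v[2], ts)                  # record that T read this version
        return v[0]                           # reads never abort

    def write(self, ts, item, value):
        v = self._latest_older(ts, item)
        if v is not None and ts < v[2]:
            raise Restart()                   # a younger txn already read that version
        self.versions.setdefault(item, []).append([value, ts, ts])

store = MVTO()
store.write(ts=1, item="balx", value=100)
print(store.read(ts=5, item="balx"))          # 100; the version is read-stamped with 5
try:
    store.write(ts=3, item="balx", value=90)  # an older write after a younger read
except Restart:
    print("T with TS 3 is aborted and restarted")
```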
Example schedule for transactions T1 to T5 (times 1 to 23), with operations including Begin, Read(x), x = x + 20, Write(x), Read(y), y = y / 2, Write(y), y = y + x - 100, x = x + 0.10x, and Commit.
Concurrency Control Techniques Overview
Locking: Basic Rules; 2PL (Regular, Rigorous, Strict); Deadlock Prevention (Timeouts, Wait-Die, Wound-Wait); Deadlock Detection (Wait-for Graph).
Time-stamping: Basic Timestamp Ordering; Thomas’s Write Rule; Multi-version Timestamp Ordering.
Optimistic.
Optimistic Techniques
Main idea: assume conflict is rare, so it is more efficient to let transactions proceed without the delays that locking or timestamping impose to ensure serializability. At commit, a check is made to determine whether a conflict has occurred; if there is a conflict, the transaction must be rolled back and restarted. Potentially allows greater concurrency than traditional protocols.
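A sketch of validation at commit, in a backward-validation style: each transaction records its read and write sets and, at commit, checks its read set against the write sets of transactions that committed after it started. Structures and names are illustrative, and Restart is reused from the earlier sketch:

```python
committed = []        # (commit_number, write_set) for each finished transaction
commit_counter = 0

class OptimisticTxn:
    def __init__(self):
        self.start = commit_counter          # how far the committed list had got
        self.read_set, self.write_set = set(), set()

    def read(self, item):
        self.read_set.add(item)              # reads go ahead with no delay

    def write(self, item):
        self.write_set.add(item)             # writes are buffered locally

    def commit(self):
        global commit_counter
        # Validate: nobody who committed after we started wrote anything we read.
        for number, write_set in committed:
            if number > self.start and write_set & self.read_set:
                raise Restart()              # conflict: roll back and rerun
        commit_counter += 1
        committed.append((commit_counter, self.write_set))

t = OptimisticTxn()
t.read("balx")
t.write("balx")
t.commit()                                   # nothing committed meanwhile, so it validates
```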
Database Recovery
Process of restoring the database to a correct state in the event of a failure. Transactions represent the basic unit of recovery; the recovery manager is responsible for atomicity and durability.
If a failure occurs between commit and the database buffers being flushed to secondary storage then, to ensure durability, the recovery manager has to redo (rollforward) the transaction's updates.
If a transaction had not committed at failure time, the recovery manager has to undo (rollback) any effects of that transaction, for atomicity.
Partial undo: only one transaction has to be undone. Global undo: all transactions have to be undone.
Example
(Timeline: transactions T1 to T6 running between start t0, checkpoint tc, and failure tf.)
The DBMS starts at time t0 but fails at time tf. Assume the data for transactions T2 and T3 have been written to secondary storage. T1 and T6 have to be undone. In the absence of any other information, the recovery manager has to redo T2, T3, T4, and T5.
Recovery Facilities DBMS should provide following facilities to assist with recovery: Backup mechanism: which makes periodic backup copies of database. Logging facilities: which keep track of current state of transactions and database changes. Checkpoint facility: which enables updates to database in progress to be made permanent. Recovery manager: which allows DBMS to restore database to consistent state following a failure.
2. Logging Facilities: The Log File Contains information about all updates to database: Transaction records. Checkpoint records. Often used for other purposes (for example, auditing).
3. Checkpoint Facility Checkpoint Point of synchronization between database and log file. All buffers are written to secondary storage. A checkpoint consists of the following actions: Suspend execution of transactions temporarily. Write all updated buffers to disk. Write a checkpoint log record to disk. Resume executing transactions. When failure occurs, the recovery manager performs the following: Redo all transactions that committed since the checkpoint. Undo all transactions active at time of crash (if using immediate update).
Example
(Timeline: transactions T1 to T6 between start t0, checkpoint tc, and failure tf.)
T1 and T6: undo. T2 and T3: do nothing. T4 and T5: redo.
Checkpoint in Log File
Main Recovery Techniques
Main recovery techniques: Deferred Update. Immediate Update.
Deferred Update
Updates are not written to the database until after a transaction has reached its commit point. Start from the last checkpoint:
If a transaction committed before the checkpoint: do nothing.
If a transaction committed after the checkpoint: redo it.
If a transaction had not committed by the failure: do nothing.
Immediate Update
Updates are applied to the database as they occur. Start from the last checkpoint:
If a transaction committed before the checkpoint: do nothing.
If a transaction committed after the checkpoint: redo it.
If a transaction had not committed by the failure: undo it.
Undo is done in reverse order: from the bottom of the log file to the top. Redo is done in order: from the top of the log file to the bottom.
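A sketch of immediate-update recovery from a log: find the committed transactions, redo their updates from the top of the log down, then undo uncommitted updates from the bottom up. The log-record layout is illustrative, and the log is assumed to start at the last checkpoint:

```python
# Each record: (txn, action, item, before_value, after_value); "begin"/"commit"
# records carry None in the last three fields.
log = [
    ("T4", "begin",  None,   None, None),
    ("T4", "update", "balx", 100,  200),
    ("T4", "commit", None,   None, None),
    ("T6", "begin",  None,   None, None),
    ("T6", "update", "baly", 50,   60),
    # crash here: T6 never committed
]

def recover(log, db):
    committed = {t for t, action, *_ in log if action == "commit"}
    for t, action, item, before, after in log:            # redo pass, top to bottom
        if action == "update" and t in committed:
            db[item] = after
    for t, action, item, before, after in reversed(log):  # undo pass, bottom to top
        if action == "update" and t not in committed:
            db[item] = before
    return db

print(recover(log, {"balx": 100, "baly": 60}))   # {'balx': 200, 'baly': 50}
```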
Example Using Deferred Update: Which transactions to undo? Redo? Using Immediate Update: Which transactions to undo? Redo?