ARIES, Concluded and Distributed Databases: R* Zachary G. Ives University of Pennsylvania CIS 650 – Implementing Data Management Systems October 2, 2008 Some content on 2-phase commit courtesy Ramakrishnan, Gehrke
2 Administrivia Next reading assignment: Principles of Data Integration, Chapter 3.3 (no review required for this one). Review Mariposa & R* for Tuesday
ARIES: Basic Data Structures (diagram) On disk: DB data pages, each stamped with a pageLSN, and the LOG with its master record. In RAM: the Xact Table (lastLSN, status per transaction), the Dirty Page Table (recLSN per page), and flushedLSN. Each LogRecord carries prevLSN, XID, type, length, pageID, offset, before-image, and after-image.
Normal Execution of a Transaction Series of reads & writes, followed by commit or abort We will assume that write is atomic on disk In practice, additional details to deal with non-atomic writes Strict 2PL STEAL, NO-FORCE buffer management, with Write-Ahead Logging
Checkpointing Periodically, the DBMS creates a checkpoint Minimizes recovery time in the event of a system crash Write to log: begin_checkpoint record: when checkpoint began end_checkpoint record: current Xact table and dirty page table A “fuzzy checkpoint”: Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record No attempt to force dirty pages to disk; effectiveness of checkpoint limited by oldest unwritten change to a dirty page. (So it’s a good idea to periodically flush dirty pages to disk!) Store LSN of checkpoint record in a safe place (master record)
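A minimal sketch of a fuzzy checkpoint in Python, using a plain list as the log and dicts for the two tables; the record layout and the names append, take_checkpoint, and master_record are illustrative, not a real DBMS interface.

log = []                      # append-only log; index in the list = LSN
xact_table = {"T1": {"lastLSN": 3, "status": "running"}}
dirty_page_table = {"P5": {"recLSN": 3}}
master_record = {}            # stands in for the well-known disk location

def append(rec):
    log.append(rec)
    return len(log) - 1       # the new record's LSN

def take_checkpoint():
    # begin_checkpoint: marks when the checkpoint started; other Xacts keep running
    begin_lsn = append({"type": "begin_checkpoint"})
    # end_checkpoint: snapshot of both tables as of begin_checkpoint (fuzzy);
    # note that no dirty pages are forced to disk here
    append({"type": "end_checkpoint",
            "xact_table": dict(xact_table),
            "dirty_page_table": dict(dirty_page_table)})
    # record the checkpoint's LSN where restart can always find it
    master_record["checkpoint_lsn"] = begin_lsn

take_checkpoint()
print(master_record, log[-1]["dirty_page_table"])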
Simple Transaction Abort, 1/2 For now, consider an explicit abort of a Xact (No crash involved) We want to “play back” the log in reverse order, UNDO ing updates Get lastLSN of Xact from Xact table Can follow chain of log records backward via the prevLSN field When do we quit? Before starting UNDO, write an Abort log record For recovering from crash during UNDO!
Abort, 2/2 To perform UNDO, must have a lock on data! No problem – no one else can be locking it Before restoring old value of a page, write a CLR: You continue logging while you UNDO!! CLR has one extra field: undoNextLSN Points to the next LSN to undo (i.e. the prevLSN of the record we’re currently undoing). CLRs never Undone (but they might be Redone when repeating history: guarantees Atomicity!) At end of UNDO, write an “end” log record
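A sketch of abort-time UNDO under these rules, assuming a toy log where the LSN is just a list index; the record fields and helper names are made up for illustration.

log = [
    {"type": "update", "xid": "T1", "page": "P5", "before": "old5", "prevLSN": None},  # LSN 0
    {"type": "update", "xid": "T1", "page": "P3", "before": "old3", "prevLSN": 0},     # LSN 1
]
pages = {"P5": "new5", "P3": "new3"}
xact_table = {"T1": {"lastLSN": 1}}

def append(rec):
    log.append(rec)
    return len(log) - 1

def abort(xid):
    append({"type": "abort", "xid": xid})           # written before UNDO starts
    lsn = xact_table[xid]["lastLSN"]
    while lsn is not None:                          # follow the prevLSN chain backward
        rec = log[lsn]
        # log the compensation first, pointing past the record being undone
        append({"type": "CLR", "xid": xid, "undoNextLSN": rec["prevLSN"]})
        pages[rec["page"]] = rec["before"]          # restore the before-image
        lsn = rec["prevLSN"]
    append({"type": "end", "xid": xid})             # the abort is complete

abort("T1")
print(pages)   # {'P5': 'old5', 'P3': 'old3'}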
Transaction Commit Write commit record to log All log records up to Xact’s lastLSN are flushed Guarantees that flushedLSN ≥ lastLSN Note that log flushes are sequential, synchronous writes to disk Many log records per log page Commit() returns Write end record to log
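A sketch of the commit path, assuming flush() stands in for the synchronous, sequential log write; names and record layout are illustrative.

log, flushedLSN = [], -1

def append(rec):
    log.append(rec)
    return len(log) - 1

def flush(up_to_lsn):
    global flushedLSN
    flushedLSN = max(flushedLSN, up_to_lsn)          # stand-in for the disk write

def commit(xid, lastLSN):
    commit_lsn = append({"type": "commit", "xid": xid, "prevLSN": lastLSN})
    flush(commit_lsn)       # after this, flushedLSN >= lastLSN: the commit is durable
    # commit() can now return to the caller; the end record may follow lazily
    append({"type": "end", "xid": xid})

commit("T1", lastLSN=append({"type": "update", "xid": "T1"}))
print(flushedLSN, [r["type"] for r in log])          # 1 ['update', 'commit', 'end']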
Crash Recovery: Big Picture Start from a checkpoint (found via master record) Three phases: 1. Figure out which Xacts committed since checkpoint, which failed (Analysis) 2. REDO all actions (repeat history) 3. UNDO effects of failed Xacts (Timeline sketch: Analysis scans forward from the last checkpoint to the crash; Redo starts from the smallest recLSN in the dirty page table after Analysis; Undo reaches back to the oldest log record of any Xact active at the crash.)
Recovery: The Analysis Phase Reconstruct state at checkpoint via end_checkpoint record Scan log forward from checkpoint End record: Remove Xact from Xact table (no longer active) Other records: Add Xact to Xact table, set lastLSN=LSN, change Xact status on commit Update record: If P not in Dirty Page Table, Add P to D.P.T., set its recLSN=LSN
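A sketch of the Analysis scan over the same toy log layout; it assumes the Xact table and Dirty Page Table have already been initialized from the end_checkpoint record, and all names are illustrative.

def analysis(log, start_lsn, xact_table, dirty_page_table):
    for lsn in range(start_lsn, len(log)):
        rec = log[lsn]
        xid = rec.get("xid")
        if rec["type"] == "end":
            xact_table.pop(xid, None)                        # Xact no longer active
            continue
        if xid is not None:
            entry = xact_table.setdefault(xid, {"status": "running"})
            entry["lastLSN"] = lsn                           # most recent record seen for this Xact
            if rec["type"] == "commit":
                entry["status"] = "committed"
        if rec["type"] == "update" and rec["page"] not in dirty_page_table:
            dirty_page_table[rec["page"]] = {"recLSN": lsn}  # first write that dirtied the page
    return xact_table, dirty_page_table

toy_log = [{"type": "update", "xid": "T1", "page": "P5"},
           {"type": "update", "xid": "T2", "page": "P3"},
           {"type": "commit", "xid": "T2"},
           {"type": "end",    "xid": "T2"}]
print(analysis(toy_log, 0, {}, {}))   # T1 remains a loser; P5, P3 get recLSNs 0 and 1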
Recovery: The REDO Phase Repeat history to reconstruct state at crash: Reapply all updates (even of aborted Xacts!), redo CLRs Puts us in a state where we know UNDO can do the right thing Scan forward from the log rec containing the smallest recLSN in D.P.T. For each CLR or update log rec LSN, REDO the action unless: Affected page is not in the Dirty Page Table, or Affected page is in D.P.T., but has recLSN > LSN, or pageLSN (in DB) ≥ LSN To REDO an action: Reapply logged action Set pageLSN to LSN. Don’t log this!
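A sketch of the REDO scan restating the three skip conditions above; disk_pages is a stand-in for the buffer pool/disk, and the record layout is the same illustrative one as before.

def redo(log, dirty_page_table, disk_pages):
    start = min(e["recLSN"] for e in dirty_page_table.values())
    for lsn in range(start, len(log)):
        rec = log[lsn]
        if rec["type"] not in ("update", "CLR"):
            continue
        dpt = dirty_page_table.get(rec["page"])
        if dpt is None or dpt["recLSN"] > lsn:
            continue                                         # page cannot need this change
        page = disk_pages[rec["page"]]
        if page["pageLSN"] >= lsn:
            continue                                         # change already reached the page
        page["data"] = rec["after"]                          # reapply the logged action
        page["pageLSN"] = lsn                                # stamp the page; do NOT log this

toy_log = [{"type": "update", "page": "P5", "after": "v1"},
           {"type": "update", "page": "P5", "after": "v2"}]
disk = {"P5": {"pageLSN": 0, "data": "v1"}}                  # LSN-0 change already on disk
redo(toy_log, {"P5": {"recLSN": 0}}, disk)
print(disk)                                                  # only the LSN-1 update is reapplied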
Recovery: The UNDO Phase ToUndo = { l | l is the lastLSN of a “loser” Xact } Repeat: Choose largest LSN among ToUndo If this LSN is a CLR and undoNextLSN==NULL Write an End record for this Xact If this LSN is a CLR and undoNextLSN != NULL Add undoNextLSN to ToUndo Else this LSN is an update Undo the update, write a CLR, add prevLSN to ToUndo Until ToUndo is empty
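A sketch of the UNDO loop above; losers maps each loser Xact to its lastLSN entry, and the record layout is again illustrative.

def undo(log, losers, pages):
    def append(rec):
        log.append(rec)
        return len(log) - 1

    to_undo = {info["lastLSN"] for info in losers.values()}
    while to_undo:
        lsn = max(to_undo)                                   # always pick the largest LSN
        to_undo.discard(lsn)
        rec = log[lsn]
        if rec["type"] == "CLR":
            if rec["undoNextLSN"] is None:
                append({"type": "end", "xid": rec["xid"]})   # this loser is fully undone
            else:
                to_undo.add(rec["undoNextLSN"])              # resume undoing past the CLR
        elif rec["type"] == "update":
            append({"type": "CLR", "xid": rec["xid"],
                    "undoNextLSN": rec["prevLSN"]})          # log the compensation first
            pages[rec["page"]] = rec["before"]               # restore the before-image
            if rec["prevLSN"] is None:
                append({"type": "end", "xid": rec["xid"]})
            else:
                to_undo.add(rec["prevLSN"])

toy_log = [{"type": "update", "xid": "T1", "page": "P5", "before": "old", "prevLSN": None}]
pages = {"P5": "new"}
undo(toy_log, {"T1": {"lastLSN": 0}}, pages)
print(pages, [r["type"] for r in toy_log])                   # {'P5': 'old'}, then CLR and end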
Example of Recovery Log contents at the crash, oldest first: begin_checkpoint; end_checkpoint; update: T1 writes P5 (LSN 10); update: T2 writes P3 (LSN 20); T1 abort; CLR: Undo T1 LSN 10; T1 End; update: T3 writes P1 (LSN 50); update: T2 writes P5 (LSN 60); CRASH, RESTART (Work through the Xact table, Dirty Page Table, flushedLSN, ToUndo set, and prevLSN chains for this log.)
Example: Crash During Restart Log contents, oldest first: begin_checkpoint, end_checkpoint; update: T1 writes P5 (LSN 10); update: T2 writes P3 (LSN 20); T1 abort; CLR: Undo T1 LSN 10, T1 End; update: T3 writes P1 (LSN 50); update: T2 writes P5 (LSN 60); CRASH, RESTART; CLR: Undo T2 LSN 60; CLR: Undo T3 LSN 50, T3 end; CRASH, RESTART; CLR: Undo T2 LSN 20, T2 end (Again track the Xact table, Dirty Page Table, flushedLSN, ToUndo, and the undoNextLSN values in the CLRs.)
Additional Crash Issues What happens if system crashes during Analysis? How do you limit the amount of work in REDO? Flush asynchronously in the background Watch “hot spots”! How do you limit the amount of work in UNDO? Avoid long-running Xacts
Summary of Logging/Recovery Recovery Manager guarantees Atomicity & Durability Use WAL to allow STEAL/NO-FORCE w/o sacrificing correctness LSNs identify log records; linked into backwards chains per transaction (via prevLSN) pageLSN allows comparison of data page and log records
Summary, Continued Checkpointing: A quick way to limit the amount of log to scan on recovery. Recovery works in 3 phases: Analysis: Forward from checkpoint Redo: Forward from oldest recLSN Undo: Backward from end to first LSN of oldest Xact alive at crash Upon Undo, write CLRs Redo “repeats history”: Simplifies the logic!
18 Distributed Databases Goal: provide an abstraction of a single database, but with the data distributed on different sites This poses a new set of challenges: Source tables (or subsets of tables) may be located on different machines There is a data transfer cost – over the network Different CPUs have different amounts of resources Available resources change during optimization- and run-time Today: R*: the first “real” distributed DBMS prototype (Distributed INGRES never actually ran) – focus was a LAN with a small number of sites Mariposa: an attempt to distribute across the wide area
19 Issues in Distributed Databases How do we place the data? R*: this is done by humans Mariposa: this is done via economic model What new capabilities do we have? R*: SHIP, 2-phase dependent join, bloomjoin, … Mariposa: ship processing to another node Challenges in optimization R*: more complex cost model, more exec. options Mariposa: bidding on computation and other resources
20 System-R* Optimization Focus: distributed joins 1. Can ship a table and then join it (“ship whole”) 2. Can probe the inner table and return matches (“fetch matches”) Their measurements favored #1 – why? Why do they require: Cardinality of outer < ½ × # messages required to ship inner Join cardinality < inner cardinality How can the 2nd case be improved, in the spirit of block NLJ?
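A back-of-the-envelope sketch of why the outer must be small for fetch-matches to win, assuming each outer tuple costs about two messages (the probe out and the matching tuples back) while ship-whole pays roughly one message per page of the inner; the message size and cardinalities here are invented.

def fetch_matches_msgs(outer_cardinality):
    return 2 * outer_cardinality                 # one probe + one reply per outer tuple

def ship_whole_msgs(inner_bytes, msg_bytes=4096):
    return -(-inner_bytes // msg_bytes)          # ceiling division: messages to ship the inner once

outer_card, inner_bytes = 1_000, 40_000_000      # invented numbers
print(fetch_matches_msgs(outer_card))            # 2000 messages
print(ship_whole_msgs(inner_bytes))              # 9766 messages
# Fetch-matches can win only when 2 * |outer| < messages to ship the inner,
# i.e. cardinality of outer < 1/2 * (# messages required to ship inner).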
21 Parallelism in Joins They assert it’s better to ship from outer relation to site of inner relation because of potential for parallelism Where? What changes if the inner relation is horizontally partitioned across multiple sites (i.e., relations are “striped”)? They can also exploit the possibility of parallelism in sorting for a merge join – how?
22 Other Joins Ship the inner relation and then index it Two-phase semijoin Very similar to “fetch matches” Project the join columns of S and T, sort them, and remove duplicates Ship to the opposite sites, use to fetch the tuples that match Merge-join the matching tuples Bloomjoin Generate a Bloom filter from S Send to site of T, find matches Return to S, join
23 Bloom Filters (Bloom 1970) Use k hash functions and an m-bit vector For each tuple, hash its key x with each function h_i and set bit h_i(x) in the bit vector Probe the Bloom filter for x′ by testing whether all bits h_i(x′) are set After n values have been inserted, the probability that any given bit is still 0 is (1 − 1/m)^(kn), so the false-positive probability is roughly (1 − (1 − 1/m)^(kn))^k (Diagram: bits h_1(x) … h_4(x) set in an m-bit vector)
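A minimal Bloom filter sketch matching the slide, with the k hash functions simulated by salting SHA-256; the class and parameter names are illustrative, and the tail shows the Bloomjoin flavor of use.

import hashlib

class BloomFilter:
    def __init__(self, m_bits, k_hashes):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(-(-m_bits // 8))           # the m-bit vector

    def _positions(self, key):
        for i in range(self.k):                          # k independent hash functions
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)        # set bit h_i(key)

    def might_contain(self, key):
        return all(self.bits[pos // 8] >> (pos % 8) & 1
                   for pos in self._positions(key))      # are all k bits set?

# Bloomjoin flavor: build a filter on S's join keys, ship the (small) bit
# vector to T's site, and send back only the T tuples whose keys pass it.
bf = BloomFilter(m_bits=1 << 16, k_hashes=4)
for key in range(1000):
    bf.add(key)
candidates = [k for k in range(2000) if bf.might_contain(k)]
print(len(candidates))   # the 1000 true matches, plus any false positives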
24 Joins – “High-Speed” Network (Chart comparing Semijoin, R* + Temp Indices, R* (Distributed), R* (Local), and Bloomjoin.)
25 R* Optimization Assessment Distributed optimization is hard! They ignore load-balance issues, and messaging overhead is probably ~10%, but still… Shipping costs are difficult to assess, since they depend on precise cardinality results Optimizing the plan locally, and using that as a model for distributed processing, doesn’t provide any optimality guarantees either – doesn’t account for parallelism
Updates in R* Require Two-Phase Commit (2PC) Site at which a transaction originates is the coordinator; other sites at which it executes are subordinates Two rounds of communication, initiated by coordinator: Voting Coordinator sends prepare messages, waits for yes or no votes Then, decision or termination Coordinator sends commit or rollback messages, waits for acks Any site can decide to abort a transaction!
27 Steps in 2PC When a transaction wants to commit: Coordinator sends prepare message to each subordinate Subordinate force-writes an abort or prepare log record and then sends a no (abort) or yes (prepare) message to coordinator Coordinator considers votes: If unanimous yes votes, force-writes a commit log record and sends commit message to all subordinates Else, force-writes abort log rec, and sends abort message Subordinates force-write abort/commit log records based on message they get, then send ack message to coordinator Coordinator writes end log record after getting all acks
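A sketch of these steps from the coordinator's side, with force-writes modeled as list appends and messages as direct method calls; timeouts, failures, and the recovery paths are omitted, and all names are illustrative.

def two_phase_commit(coordinator_log, subordinates):
    # Phase 1: voting
    votes = [sub.prepare() for sub in subordinates]   # each sub force-writes, then votes
    # Phase 2: decision
    if all(v == "yes" for v in votes):
        coordinator_log.append("commit (forced)")
        decision = "commit"
    else:
        coordinator_log.append("abort (forced)")
        decision = "abort"
    for sub in subordinates:
        sub.decide(decision)                          # sub force-writes commit/abort, then acks
    coordinator_log.append("end")                     # only after all acks are in
    return decision

class Subordinate:
    def __init__(self, vote):
        self.log, self.vote = [], vote
    def prepare(self):
        self.log.append(f"{'prepare' if self.vote == 'yes' else 'abort'} (forced)")
        return self.vote
    def decide(self, decision):
        self.log.append(f"{decision} (forced)")       # then send the ack

clog = []
print(two_phase_commit(clog, [Subordinate("yes"), Subordinate("yes")]), clog)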
28 Illustration of 2PC Message sequence between the Coordinator, Subordinate 1, and Subordinate 2: the coordinator force-writes a begin log entry and sends “prepare” to both subordinates; each subordinate force-writes a prepared log entry and sends “yes”; the coordinator force-writes a commit log entry and sends “commit”; each subordinate force-writes a commit log entry and sends “ack”; finally the coordinator writes the end log entry.
Comments on 2PC Every message reflects a decision by the sender; to ensure that this decision survives failures, it is first recorded in the local log All log records for a transaction contain its ID and the coordinator’s ID The coordinator’s abort/commit record also includes IDs of all subordinates Thm: there exists no distributed commit protocol that can recover without communicating with other processes, in the presence of multiple failures!
What if a Site Fails in the Middle? If we have a commit or abort log record for transaction T, but not an end record, we must redo/undo T If this site is the coordinator for T, keep sending commit/abort msgs to subordinates until acks have been received If we have a prepare log record for transaction T, but not commit/abort, this site is a subordinate for T Repeatedly contact the coordinator to find status of T, then write commit/abort log record; redo/undo T; and write end log record If we don’t have even a prepare log record for T, unilaterally abort and undo T This site may be the coordinator for T! If so, subordinates may still send messages about T, and their work for T must also be undone
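A sketch of this per-transaction restart decision, keyed on which 2PC log records survived at the recovering site; purely illustrative.

def recover_transaction(records, i_am_coordinator):
    if "commit" in records or "abort" in records:
        if "end" in records:
            return "nothing to do"                    # transaction fully finished here
        if i_am_coordinator:
            return "redo/undo T; keep resending the decision until all acks arrive"
        return "redo/undo T locally, then ack"
    if "prepare" in records:
        # a subordinate that voted yes but never learned the outcome
        return "ask the coordinator for T's fate, write commit/abort, redo/undo, write end"
    return "unilaterally abort and undo T"            # not even a prepare record

print(recover_transaction({"prepare"}, i_am_coordinator=False))
print(recover_transaction({"commit"}, i_am_coordinator=True))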
Blocking for the Coordinator If coordinator for transaction T fails, subordinates who have voted yes cannot decide whether to commit or abort T until coordinator recovers T is blocked Even if all subordinates know each other (extra overhead in prepare msg) they are blocked unless one of them voted no
Link and Remote Site Failures If a remote site does not respond during the commit protocol for transaction T, either because the site failed or the link failed: If the current site is the coordinator for T, should abort T If the current site is a subordinate, and has not yet voted yes, it should abort T If the current site is a subordinate and has voted yes, it is blocked until the coordinator responds!
Observations on 2PC Ack msgs used to let coordinator know when it’s done with a transaction; until it receives all acks, it must keep T in the transaction-pending table If the coordinator fails after sending prepare msgs but before writing commit/abort log recs, when it comes back up it aborts the transaction
34 R* Wrap-Up One of the first systems to address both distributed query processing and distributed updates Focus was on local-area networks, small number of sites Next system, Mariposa, focuses on environments more like the Internet…