Skippy: Enabling Long-Lived Snapshots of the Long-Lived Past

Ross Shaull (rshaull@cs.brandeis.edu), Liuba Shrira (liuba@cs.brandeis.edu), Hao Xu (hxu@cs.brandeis.edu)
Lab for Experimental Software Systems, Brandeis University

Indexing Split Snapshots

Each snapshot needs its own page table (an SPT), which points to current-state and COW'd pages. Updating SPTs on disk would be costly, since a single COW may change the pointers in multiple SPTs.

[Figure: database and snapshot pages, with SPT 1 and SPT 2 pointing to current and COW'd copies of P1, P2, and P3.] Order of events:
1. Snapshot 1 declared
2. Page 1 modified
3. Snapshot 2 declared
4. Page 1 modified again
5. Page 2 modified

Accessing Snapshots with Maplog

Instead of maintaining many SPTs, append the mappings to snapshot pages into a log, the maplog.
Ordering invariant: all mappings retained for snapshot X are written into the maplog before any mapping retained for snapshot X+1.
Construct the SPT for snapshot X by scanning the maplog for first-encountered mappings (FEMs). Any page for which no mapping is found in the maplog is still "in the database" (i.e., it has not been COW'd yet).

Combat Skew with Skippy

Skew hurts the cost of a maplog scan. For a faster scan, create higher-level logs of FEMs with fewer repeated mappings.

[Figure: maplog for Snap 1 through Snap 6 with a Skippy Level 1 built above it; solid arrows denote pointers, dotted arrows indicate copying.]

- Divide the maplog into equal-sized chunks called nodes.
- Copy each FEM in a maplog node into Skippy Level 1.
- At the end of each node, record an up-link that points to the next position in Skippy Level 1 where a mapping will be stored.
- To construct Skippy Level N, recursively apply the same procedure to the previous Skippy level.
- When scanning, follow the up-links to the Skippy levels (a "Skippy scan").

Skippy-Based Snapshot System

Implemented in Berkeley DB (BDB). Page-level snapshots are created using COW. The page cache is augmented to
intercept requests for pages. Pages that are write-locked get COW'd on the first request after the declaration of a snapshot. Pages are flushed periodically from a background thread to a second disk. Snapshot metadata is created in memory at transaction commit. The metadata (and any unflushed pages COW'd before the checkpoint) are written to disk during checkpoint. Recovery of snapshot pages and metadata can be done in one log pass.

Expected Cost to Build an SPT

Calculate the expected cost of building an SPT by factoring in:
- acceleration
- the cost to read sequentially at each level
- the cost of disk seeks between levels

The table shows the time to build an SPT versus the number of Skippy levels for various skews.

Skew  | # Skippy Levels | Time to Build SPT (s)
50/50 | 0               | 13.8
80/20 | 0               | 19.0
80/20 | 1               | 15.8
80/20 | 2               | 14.7
80/20 | 3               | 13.9
99/1  | 0               | 33.3
99/1  | 1               | 6.69

Setup: 100M database; 50K node (holds 2560 mappings, which is 1/10th the number of database pages); 10,000 RPM disk.

Conclusions: Skippy can counteract 80/20 skew in 3 levels. A 99/1 workload has a hot section much smaller than the node size, so one level is enough.

Can we create split snapshots with a Skippy index efficiently?
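The cost factors above (acceleration, sequential read rate per level, seeks between levels) can be combined into a toy back-of-envelope model. This is a hypothetical sketch; the function name `spt_build_cost`, the parameter values, and the exact formula are illustrative assumptions, not the paper's cost model:

```python
def spt_build_cost(n_mappings, levels, accel, seq_rate, seek_time):
    """Toy cost model (assumption, not the paper's exact formula):
    a Skippy scan reads roughly n_mappings / accel**levels mappings
    sequentially at the top level, and pays one disk seek per level
    crossed on the way up.

    n_mappings: mappings in the maplog for one overwrite cycle
    accel:      per-level shrink factor from dropping repeats
    seq_rate:   mappings read per second sequentially
    seek_time:  seconds per inter-level seek
    """
    top_level_size = n_mappings / (accel ** levels)
    return top_level_size / seq_rate + levels * seek_time
```

The model captures the tradeoff the table illustrates: each extra level shrinks the sequential scan, but adds a seek, so beyond some level count the seeks dominate and more levels stop helping.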
The plot shows the time to complete a single-threaded updating workload of 100,000 transactions in a 66M database under each of the 50/50, 80/20, and 99/1 skews. We can retain a snapshot after every transaction for a 6–8% penalty.

Across-Time Execution (ATE): Skippy can be used to cheaply construct SPTs for a window of snapshots for "across-time" analysis.

Garbage collection: collecting old records from a log is simple, just chop off the tail. Skippy enables cheap generational garbage collection of snapshots according to priorities (Thresher; Shrira and Xu, USENIX '06).

An old problem: time travel in databases. Retaining past state at the logical record level (ImmortalDB, Postgres) changes the arrangement of the current state, while file system-level approaches block transactions to get consistency (VSS).

A new solution: split snapshots.
- Integrate with the page cache and transaction manager to provide disk page-level snapshots.
- The application declares transactionally consistent snapshots at any time, with any frequency.
- Snapshots are retained incrementally using copy-on-write (COW), without reorganizing the database.
- All applications and access methods run unmodified on persistent, on-line snapshots.
- Achieve good performance in the same manner as the database does: leverage db recovery to defer snapshot writes.

A new problem: how do we index copy-on-write split snapshots?

References

Shaull, R., Shrira, L., and Xu, H. Skippy: Enabling Long-Lived Snapshots of the Long-Lived Past. ICDE 2008.
Shrira, L., van Ingen, C., and Shaull, R. Time Travel in the Virtualized Past. SYSTOR 2007.
Shrira, L., and Xu, H. Snap: Efficient Snapshots for Back-In-Time Execution. ICDE 2005.
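The split-snapshot scheme above (snapshots declared at any time, pre-images retained via COW on the first write after a declaration, mappings appended to a shared maplog) can be illustrated with a minimal in-memory sketch. This is hypothetical Python with invented class and method names, not the BDB-based implementation:

```python
class SplitSnapshotStore:
    """Toy model of split snapshots: current pages live in `db`;
    pre-images are COW'd into `snap_pages` on the first write after a
    snapshot declaration, and a mapping is appended to the maplog."""

    def __init__(self, pages):
        self.db = dict(pages)       # current database state
        self.snap_pages = []        # store of COW'd page pre-images
        self.maplog = []            # append-only (page, location) log
        self.starts = {}            # Start(X): maplog offset per snapshot
        self.snap_id = 0
        self.copied = set()         # pages already COW'd this epoch

    def declare_snapshot(self):
        """Declare a transactionally consistent snapshot."""
        self.snap_id += 1
        self.copied = set()
        self.starts[self.snap_id] = len(self.maplog)
        return self.snap_id

    def write(self, page, value):
        """COW the pre-image on the first write after a declaration."""
        if self.snap_id and page not in self.copied:
            self.maplog.append((page, len(self.snap_pages)))
            self.snap_pages.append(self.db[page])
            self.copied.add(page)
        self.db[page] = value

    def read_snapshot(self, snap, page):
        """Scan the maplog from Start(snap) for the first-encountered
        mapping; a page with no mapping is still in the database."""
        for p, loc in self.maplog[self.starts[snap]:]:
            if p == page:
                return self.snap_pages[loc]
        return self.db[page]
```

Running the poster's order of events (declare snapshot 1, modify P1, declare snapshot 2, modify P1 again, modify P2) against this sketch shows the sharing: snapshot 1 resolves P1 from its own COW'd copy, P2 from a mapping written later, and the unmodified P3 from the database.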
Skippy acceleration is the ratio of the number of mappings in the top Skippy level to the number in the maplog. Acceleration depends on:
- node size (repeated mappings are eliminated only if they appear in the same node)
- the workload (the likelihood of repetitions)

A Skippy scan that begins at Start(X) constructs the same SPT X as a maplog scan. [Figure: P2 is shared by SPT 1 and SPT 2; P3 has not been modified, so SPT 1 and SPT 2 both point to P3 in the database.]

1. Let the overwrite cycle length L be the number of page updates required to overwrite the entire database.
2. The overwrite cycle length determines the number of mappings that must be scanned to construct an SPT.
3. Let N be the number of pages in the database.
4. For a uniformly random workload, L = N ln N, by the "coupon collector's waiting time" problem (the expected number of uniform updates needed to touch all N pages is N·H_N ≈ N ln N).
5. Skew in the update workload lengthens the overwrite cycle by introducing many more repeated mappings.
6. For example, a skew of 80/20 (80% of updates go to 20% of the pages) increases L by a factor of 4.
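The two headline claims above, that a Skippy scan constructs the same SPT as a maplog scan and that a level holds fewer mappings than the log below it, can be seen in a toy sketch. This is hypothetical Python for illustration only: it builds one level, elides the on-disk up-links, and demonstrates only a scan from the start of the log rather than from an arbitrary Start(X):

```python
def first_encountered(log):
    """Collect first-encountered mappings (FEMs): page -> location.
    Pages absent from the result are still in the database."""
    spt = {}
    for page, loc in log:
        spt.setdefault(page, loc)   # keep only the first mapping seen
    return spt

def build_level(log, node_size):
    """Build the next Skippy level: within each fixed-size node, copy
    only the FEM of each page, dropping in-node repeats (the up-links
    recorded at node boundaries are elided in this sketch)."""
    level = []
    for i in range(0, len(log), node_size):
        node = log[i:i + node_size]
        level.extend(first_encountered(node).items())
    return level
```

A skewed log such as `[("P1", 0), ("P1", 1), ("P2", 2), ("P1", 3), ("P3", 4), ("P1", 5)]` shrinks at the next level because repeats of the hot page P1 collapse within each node, while the FEM scan over either log yields the same SPT.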