Slide 1/20: Turbocharging the DBMS Buffer Pool using an SSD
Jaeyoung Do, Donghui Zhang, Jignesh M. Patel, David J. DeWitt, Jeffrey F. Naughton, Alan Halverson
Microsoft Jim Gray Systems Lab & University of Wisconsin, Madison
SIGMOD 2011
Slide 2/20: Memory Hierarchy
For over three decades the hierarchy has been DRAM (as the cache) over hard disks. Now a disruptive change: where does the SSD fit? SSDs offer fast random I/Os, but they are expensive.
Conventional SSD wisdom:
- Store hot data.
- Store data accessed with random I/Os.
Slide 3/20: Take-Home Message
Use an SSD to extend the buffer pool.
Implemented in Microsoft SQL Server 2008 R2.
Evaluated with TPC-C, TPC-E, and TPC-H.
Up to a 9X speedup.
Slide 4/20: Prior Art
[Holloway09] A. L. Holloway. Chapter 4: Extending the Buffer Pool with a Solid State Disk. In Adapting Database Storage for New Hardware, UW-Madison Ph.D. thesis, 2009.
[KV09] I. Koltsidas and S. Viglas. The Case for Flash-Aware Multi-Level Caching. University of Edinburgh Technical Report, 2009.
[KVSZ10] B. M. Khessib, K. Vaid, S. Sankar, and C. Zhang. Using Solid State Drives as a Mid-Tier Cache in Enterprise Database OLTP Applications. TPCTC'10.
[CMB+10] M. Canim, G. A. Mihaila, B. Bhattacharjee, K. A. Ross, and C. A. Lang. SSD Bufferpool Extensions for Database Systems. In VLDB'10.
State of the art: Temperature-Aware Caching (TAC).
Slide 5/20: Research Issues
- Page flow
- SSD admission policy
- SSD replacement policy
- Implications for checkpointing
Slide 6/20: Implemented Designs
- Temperature-Aware Caching (TAC)
- Dual-Write (DW)
- Lazy-Cleaning (LC)
Slide 7/20: Page Flow
Buffer pool operations traced: read, evict, read, modify, evict.
Diagram: buffer pool, SSD, and disk for TAC, Dual-Write, and Lazy-Cleaning.
TAC writes a clean page to the SSD right after reading it from the disk.
Slide 8/20: Page Flow (continued)
DW and LC write a clean page to the SSD upon eviction from the buffer pool.
Slide 9/20: Page Flow (continued)
Reading a page back from the SSD works the same way in all three designs.
Slide 10/20: Page Flow (continued)
Upon dirtying a page, TAC does not reclaim the SSD frame; DW and LC do.
Slide 11/20: Page Flow (continued)
Upon evicting a dirty page: TAC and DW are write-through (the page reaches the disk immediately); LC is write-back (a background lazy-cleaning step writes it to the disk later).
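To make the differences in the page flow concrete, here is a minimal sketch of the behavior described on the last few slides. The `Page` class, the `disk`/`ssd` dictionaries, and the lazy-cleaning queue are illustrative assumptions, not the SQL Server implementation; the SSD read path (slide 9) is the same in all three designs and is omitted.

```python
# Minimal sketch of the page flow for TAC, DW, and LC.
# The data structures are illustrative; the real system works on buffer
# frames and asynchronous I/O requests.

class Page:
    def __init__(self, page_id, data, dirty=False):
        self.page_id = page_id
        self.data = data
        self.dirty = dirty

disk = {}              # page_id -> data (stable storage)
ssd = {}               # page_id -> data (SSD buffer pool frames)
lazy_clean_queue = []  # dirty SSD pages awaiting write-back to disk (LC only)

def read_from_disk(page_id, design):
    data = disk[page_id]
    if design == "TAC":
        # TAC copies the clean page into the SSD right after the disk read.
        ssd[page_id] = data
    return Page(page_id, data)

def evict(page, design):
    """Handle a page being evicted from the in-memory buffer pool."""
    if not page.dirty:
        # DW and LC copy a clean page into the SSD only at eviction time.
        if design in ("DW", "LC"):
            ssd[page.page_id] = page.data
        return
    if design in ("TAC", "DW"):
        # Write-through: the dirty page reaches the disk before the buffer
        # frame is reused. DW also refreshes the SSD copy (hence its name);
        # how TAC treats its retained SSD frame is not detailed on the slides.
        disk[page.page_id] = page.data
        if design == "DW":
            ssd[page.page_id] = page.data
    else:  # "LC"
        # Write-back: the dirty page goes only to the SSD for now; a
        # background lazy cleaner pushes it to the disk later.
        ssd[page.page_id] = page.data
        lazy_clean_queue.append(page.page_id)

def lazy_cleaner():
    """Background step used by LC to clean dirty SSD pages."""
    while lazy_clean_queue:
        page_id = lazy_clean_queue.pop(0)
        disk[page_id] = ssd[page_id]
```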
Slide 12/20: SSD Admission/Replacement Policies
TAC
- Admission: admit a page if it is warmer than the coldest page in the SSD.
- Replacement: evict the coldest SSD page.
DW/LC
- Admission: admit a page if it was loaded from the disk with a random I/O.
- Replacement: LRU-2.
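The two policy pairs can be sketched directly. The temperature counter, the `came_from_random_io` flag, and the simplified LRU-2 bookkeeping below are assumptions for illustration, not the actual SQL Server data structures.

```python
from collections import defaultdict

class TacAdmission:
    """Sketch of TAC: admit if warmer than the coldest SSD page; replace the coldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.temperature = {}                 # page_id -> temperature (e.g. access count)

    def maybe_admit(self, page_id, temperature):
        if len(self.temperature) < self.capacity:
            self.temperature[page_id] = temperature
            return True
        coldest = min(self.temperature, key=self.temperature.get)
        if temperature > self.temperature[coldest]:
            del self.temperature[coldest]     # replacement: evict the coldest page
            self.temperature[page_id] = temperature
            return True
        return False                          # not warm enough to enter the SSD

class LruTwoAdmission:
    """Sketch of DW/LC: admit only randomly-read pages; replace by LRU-2."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.accesses = defaultdict(list)     # page_id -> last two access times
        self.clock = 0

    def touch(self, page_id):
        self.clock += 1
        self.accesses[page_id] = (self.accesses[page_id] + [self.clock])[-2:]

    def maybe_admit(self, page_id, came_from_random_io):
        if not came_from_random_io:
            return False                      # sequentially-read pages are not cached
        if len(self.accesses) >= self.capacity:
            # LRU-2: evict the page whose second-most-recent access is oldest;
            # pages seen only once count as oldest of all.
            victim = min(self.accesses,
                         key=lambda p: self.accesses[p][0] if len(self.accesses[p]) == 2 else 0)
            del self.accesses[victim]
        self.touch(page_id)
        return True
```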
Slide 13/20: Implications for Checkpointing
TAC/DW: no change needed, because every page in the SSD is clean.
LC: the checkpoint logic must change to handle the dirty pages stored in the SSD.
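A short sketch of the extra checkpoint work LC implies. The slides do not describe the actual mechanism used in SQL Server, so the function and its arguments below are purely illustrative.

```python
def checkpoint(dirty_bp_pages, ssd, ssd_dirty_ids, disk):
    """Sketch: flush everything a checkpoint must make durable.

    Under TAC/DW only the first loop is needed, because all SSD-resident
    pages are clean.  Under LC the SSD may hold the only up-to-date copy of
    a page, so those pages must also reach the disk (or be otherwise
    accounted for) before the checkpoint completes.
    """
    # Dirty pages in the in-memory buffer pool (all designs).
    for page_id, data in dirty_bp_pages.items():
        disk[page_id] = data
    dirty_bp_pages.clear()

    # Dirty pages that live only in the SSD (LC only).
    for page_id in list(ssd_dirty_ids):
        disk[page_id] = ssd[page_id]
        ssd_dirty_ids.discard(page_id)
```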
Slide 14/20: Experimental Setup
Machine: HP ProLiant DL180 G6 server
Processor: Intel Xeon L5520, 2.27 GHz (dual quad-core)
Memory: 20 GB
Disks: 8 x SATA, 7200 RPM, 1 TB each
SSD: 140 GB Fusion ioDrive 160 SLC
OS: Windows Server 2008 R2
DBMS: SQL Server 2008 R2
Slide 15/20: TPC-C Speedup Relative to noSSD
LC is 9X faster than noSSD and 5X faster than DW or TAC.
Q: Why is LC so good?
A: Because TPC-C is update-intensive. Under LC, dirty pages in the SSD are frequently re-referenced; 83% of SSD references are to dirty SSD pages.
Slide 16/20: TPC-E Speedup Relative to noSSD
Q: Why do the three designs have similar speedups?
A: Because TPC-E is read-intensive.
Q: Why does the highest speedup occur at the 200 GB database size?
A: At 400 GB, a smaller fraction of the data fits in the SSD; at 100 GB, a larger fraction of the data already fits in the in-memory buffer pool.
Slide 17/20: TPC-H Speedup Relative to noSSD
Q: Why are the speedups smaller than for TPC-C or TPC-E?
A: Because most TPC-H I/Os are sequential. For random I/Os the Fusion ioDrive is 10X faster than the disks; for sequential I/Os the eight disks together are 1.4X faster than the SSD.
Slide 18/20: Disks are the Bottleneck
Chart: I/O traffic to the disks and the SSD, for TPC-E at 200 GB. The SSD reaches its capacity while running at only about half of its bandwidth; the eight disks remain the bottleneck.
As long as the disks are the bottleneck, using less expensive SSDs may be good enough.
Slide 19/20: Long Ramp-up Time
Chart: TPC-E (200 GB) ramp-up.
Q: Why does ramp-up take 10 hours?
A: Because the SSD fills slowly, gated by the random-read speed of the disks.
If restarts are frequent, restarting from the SSD may reduce ramp-up time.
Slide 20/20: Conclusions
- SSD buffer pool extension is a good idea: we observed a 9X speedup (OLTP) and a 3X speedup (DSS).
- The choice of design depends on update frequency. For update-intensive workloads (TPC-C), LC wins; for read-intensive workloads (TPC-E, TPC-H), DW, LC, and TAC perform similarly.
- Mid-range SSDs may be good enough: with eight disks, only half of the Fusion ioDrive's bandwidth is used.
- Caution: ramp-up time may be long. If restarts are frequent, the DBMS should restart from the SSD.
Slide 21: Backup Slides
Slide 22: Architectural Change
Diagram: before, the Buffer Manager (with the buffer pool) sits on top of the I/O Manager, which talks to the disk. After, an SSD Manager with its own SSD buffer pool is added, and the I/O Manager talks to both the disk and the SSD.
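A minimal sketch of the read path implied by the revised architecture: the buffer manager checks the in-memory pool, asks the SSD manager next, and falls back to the disk via the I/O manager on an SSD miss. The class and method names are illustrative, not the SQL Server interfaces.

```python
class IoManager:
    def __init__(self, disk, ssd_device):
        self.disk = disk                    # page_id -> bytes (disk contents)
        self.ssd_device = ssd_device        # frame number -> bytes (SSD contents)

    def read_disk(self, page_id):
        return self.disk[page_id]

    def read_ssd(self, frame):
        return self.ssd_device[frame]

class SsdManager:
    """Tracks which pages are cached in the SSD buffer pool and where."""
    def __init__(self, io_manager):
        self.io = io_manager
        self.frame_of = {}                  # page_id -> SSD frame number

    def try_read(self, page_id):
        frame = self.frame_of.get(page_id)
        return None if frame is None else self.io.read_ssd(frame)

class BufferManager:
    def __init__(self, ssd_manager, io_manager):
        self.bp = {}                        # in-memory buffer pool: page_id -> bytes
        self.ssd_manager = ssd_manager
        self.io = io_manager

    def get_page(self, page_id):
        if page_id in self.bp:                       # 1. in-memory hit
            return self.bp[page_id]
        data = self.ssd_manager.try_read(page_id)    # 2. SSD buffer pool
        if data is None:
            data = self.io.read_disk(page_id)        # 3. disk
        self.bp[page_id] = data
        return data
```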
Slide 23: Data Structures
Slide 24: Further Issues
- Aggressive filling
- SSD throttle control
- Multi-page I/O requests
- Asynchronous I/O handling
- SSD partitioning
- Gather write