A Robust Main-Memory Compression Scheme
Magnus Ekman and Per Stenström
Chalmers University of Technology, Göteborg, Sweden


Motivation
- Memory resources are wasted to compensate for the increasing processor/memory/disk speed gap:
  - >50% of die size is occupied by caches
  - >50% of the cost of a server is DRAM (and increasing)
- Lossless data compression techniques have the potential to free up more than 50% of memory resources.
- Unfortunately, compression introduces several challenging design and performance issues.

[Figure: a conventional memory hierarchy (core, L1 cache, L2 cache); a request fetches data directly from the uncompressed main memory space]

[Figure: the same hierarchy with a compressed main memory space; a request must first go through address translation (via a translation table) and the returned data through a decompressor, and the compressed main memory space becomes fragmented]

Contributions
A low-overhead main-memory compression scheme:
- Low decompression latency by using a simple and fast (zero-aware) algorithm
- Fast address translation through a proposed small translation structure that fits on the processor die
- Reduced fragmentation through occasional relocation of data when compressibility varies
Overall, our compression scheme frees up 30% of the memory at a marginal performance loss of 0.2%!

Outline
- Motivation
- Issues
- Contributions
- Effectiveness of Zero-Aware Compressors
- Our Compression Scheme
- Performance Results
- Related Work
- Conclusions

Frequency of zero-valued locations
- 12% of all 8KB pages contain only zeros
- 30% of all 64B blocks contain only zeros
- 42% of all 4B words contain only zeros
- 55% of all bytes are zero!
Zero-aware compression schemes have great potential!
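
To make the measurement concrete, here is a minimal sketch (ours, not the paper's tooling) of how such zero statistics could be gathered from a raw memory image; the 4 B / 64 B / 8 KB granularities follow the slide, everything else is illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Returns 1 if the n bytes at p are all zero. */
static int all_zero(const uint8_t *p, size_t n) {
    for (size_t i = 0; i < n; i++)
        if (p[i] != 0) return 0;
    return 1;
}

/* Scans a memory image and reports how much of it is zero at each
 * granularity from the slide: bytes, 4 B words, 64 B blocks, 8 KB pages. */
void zero_stats(const uint8_t *mem, size_t len) {
    size_t zbytes = 0, zwords = 0, zblocks = 0, zpages = 0;
    for (size_t i = 0; i < len; i++)               zbytes  += (mem[i] == 0);
    for (size_t i = 0; i + 4 <= len; i += 4)       zwords  += all_zero(mem + i, 4);
    for (size_t i = 0; i + 64 <= len; i += 64)     zblocks += all_zero(mem + i, 64);
    for (size_t i = 0; i + 8192 <= len; i += 8192) zpages  += all_zero(mem + i, 8192);
    printf("zero bytes:      %zu of %zu\n", zbytes, len);
    printf("zero 4B words:   %zu\n", zwords);
    printf("zero 64B blocks: %zu\n", zblocks);
    printf("zero 8KB pages:  %zu\n", zpages);
}
```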

Evaluated Algorithms
Zero-aware algorithms:
- FPC (Alameldeen and Wood) + 3 simplified versions
For comparison, we also consider:
- X-Match Pro (efficient hardware implementations exist)
- LZSS (popular algorithm, previously used by IBM for memory compression)
- Deflate (upper bound on compressibility)

Resulting Compressed Sizes
[Chart: compressed sizes for the SpecInt, SpecFP, and Server workloads]
Main observations:
- FPC and all its variations can free up about 45% of memory
- LZSS and X-MatchPro are only marginally better in spite of their complexity
- Deflate can free up about 80% of memory, but it is not clear how to exploit this
Fast and efficient compression algorithms exist!

Outline
- Motivation
- Issues
- Contributions
- Effectiveness of Zero-Aware Compressors
- Our Compression Scheme
- Performance Results
- Related Work
- Conclusions

Address translation
[Figure: uncompressed, compressed, and compressed fragmented data layouts; a block size vector records the size class of each block, and a Block Size Table (BST), accessed alongside the TLB, feeds an address calculator]
- A block is assigned one out of n predefined sizes.
- OS changes:
  - The block size vector is kept in the page table.
  - Each page is assigned one out of k predefined sizes; the physical address grows by log2(k) bits.
The Block Size Table enables fast translation!
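
As an illustration (our sketch, not the paper's hardware), the address calculation the BST enables reduces to summing the predefined sizes of the blocks that precede the requested block within its page. The 2-bit size codes and the particular size table below are assumptions for illustration:

```c
#include <stdint.h>

#define BLOCKS_PER_PAGE 128  /* 8 KB page / 64 B blocks, per the slides */

/* Hypothetical size classes; the real values are whatever the scheme's
 * n predefined sizes are. A 2-bit code indexes this table. */
static const uint16_t size_of_code[4] = { 0, 22, 44, 64 };

/* size_vec[b] holds the 2-bit size code of block b, as cached in the
 * on-die Block Size Table (BST). */
uint32_t compressed_block_offset(const uint8_t size_vec[BLOCKS_PER_PAGE],
                                 uint32_t block_index) {
    uint32_t offset = 0;
    for (uint32_t b = 0; b < block_index && b < BLOCKS_PER_PAGE; b++)
        offset += size_of_code[size_vec[b] & 3u];
    /* Hardware would compute this prefix sum with an adder tree in
     * parallel, which is what makes the translation fast. */
    return offset;
}
```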

Size changes and compaction
[Figure: a page divided into sub-page 0 and sub-page 1, with slack left after each block (block slack), after each sub-page (sub-page slack), and at the end of the page (page slack); a block overflow grows into the block slack, while a block underflow creates new slack]
Terminology: block overflow/underflow, sub-page overflow/underflow, page overflow/underflow.

Handling of overflows/underflows
- Block and sub-page overflows/underflows imply moving data within a page.
- On a page overflow/underflow, the entire page needs to be moved to avoid having to move several pages.
- Block and sub-page overflows/underflows are handled in hardware by an off-chip DMA engine.
- On a page overflow/underflow, a trap is taken and the mapping for the page is changed.
The processor has to stall if it accesses data that is being moved!
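
The decision cascade above can be summarized with a small sketch; the enum names and the slack-based interface are ours, not the paper's:

```c
/* 'growth' is how many extra bytes the re-compressed block now needs
 * beyond its current size class. */
typedef enum {
    FITS_IN_BLOCK_SLACK,   /* nothing moves */
    MOVE_WITHIN_SUBPAGE,   /* off-chip DMA engine shifts trailing blocks */
    MOVE_WITHIN_PAGE,      /* DMA engine shifts trailing sub-pages */
    TRAP_AND_REMAP_PAGE    /* OS trap: the whole page is relocated */
} overflow_action;

overflow_action on_block_overflow(int growth, int block_slack,
                                  int subpage_slack, int page_slack) {
    if (growth <= block_slack)   return FITS_IN_BLOCK_SLACK;
    if (growth <= subpage_slack) return MOVE_WITHIN_SUBPAGE;
    if (growth <= page_slack)    return MOVE_WITHIN_PAGE;
    return TRAP_AND_REMAP_PAGE;
}
```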

Putting it all together
[Figure: the processor die integrates the core, L1/L2 caches, the BST, the address calculator, and the compressor/decompressor; an off-chip DMA engine relocates data within the compressed pages (sub-page 0 and sub-page 1 of page 0) in main memory]

Experimental Methodology
Key issues to experimentally evaluate:
- Compressibility and the impact of fragmentation
- Performance losses for our approach
Simulation approaches (both using Simics):
- A fast functional simulator (in-order, 1 instr/cycle) allowing entire benchmarks to be run
- A detailed microarchitecture simulator driven by a single sim-point per application

Architectural Parameters
  Instr. issue:     4-way out-of-order
  Exec. units:      4 int, 2 int mul/div
  Branch pred.:     16-K entry gshare, 2K BTB
  L1 I-cache:       32 KB, 2-way, 2-cycle
  L1 D-cache:       32 KB, 4-way, 2-cycle
  L2 cache:         512 KB/thread, 8-way, 16 cycles
  Memory latency:   150 cycles (1)
  Block lock-out:   4000 cycles (2)
  Subpage lock-out: 23000 cycles (2)
  Page lock-out:    23000 cycles (2)
(1) Aggressive for future processors, so as not to give an advantage to our compression scheme.
(2) Conservatively assumes 200 MHz DDR2 DRAM, which leads to long lock-out times.
Predefined sizes are used at the block, subpage, and page level.
Loads to a block only containing zeros can retire without accessing memory!

Benchmarks
SPEC2000 was run with the reference input set; SAP and SpecJBB were run for 4 billion instructions per thread.
- SpecInt2000: Bzip, Gap, Gcc, Gzip, Mcf, Parser, Perlbmk, Twolf, Vpr
- SpecFP2000: Ammp, Art, Equake, Mesa
- Server: SAP S&D, SpecJBB

Overflows/Underflows
[Chart: overflow/underflow frequencies; note that the y-axis is logarithmic]
Main observations:
- About 1 out of every thousand instructions causes a block overflow/underflow.
- The use of subpages cuts down the number of page-level relocations by one order of magnitude.
- Memory bandwidth goes up by 58% with subpages and 212% without. Note that this is not the bandwidth to the processor chip.
- Fragmentation due to the hierarchy of pages, sub-pages, and blocks reduces the memory savings to 30%.

Detailed Performance Results
We used a single simpoint per application, according to [Sherwood et al. 2002].
Main observations:
- Decompression latency reduces performance by 0.5% on average.
- Misses to zero-valued blocks increase performance by 0.5% on average.
- Also factoring in relocation/compaction losses, the overall performance loss is only 0.2%!

Related Work
Early work on main memory compression:
- Douglis [1993], Kjelso et al. [1999], and Wilson et al. [1999]
- These works aimed at reducing paging overhead, so the significant decompression and address-translation latencies were offset by the wins.
More recently:
- IBM MXT [Abali et al. 2001]
- Compresses the entire memory with LZSS (64-cycle decompression latency)
- Translation through a memory-resident translation table
- Shields the latency with a (at that point in time, huge) 32-MByte cache; sensitive to working-set size
Compression algorithm:
- Inspired by the frequent-value locality work of Zhang, Yang, and Gupta [2000]
- Compression algorithm from Alameldeen and Wood [2004]

Concluding Remarks
It is possible to free up significant amounts of memory resources with virtually zero performance overhead.
This was achieved by
- exploiting zero-valued bytes, which account for as much as 55% of the memory contents,
- leveraging a fast compression/decompression scheme,
- a fast translation mechanism, and
- a hierarchical memory layout which offers some slack at the block, subpage, and page level.
Overall, 30% of memory could be freed up at a loss of 0.2% on average.

Backup Slides

Fragmentation - Results

% misses to zero-valued blocks
[Chart: per-benchmark percentage of misses that request zero-valued blocks (bzip through jbb), shown for a 512 KB and a multi-megabyte cache configuration]
For gap, gcc, gzip, and mesa, more than 20% of the misses request zero-valued blocks; for the rest the percentage is quite small.

Frequent Pattern Compression (Alameldeen and Wood)
Each 32-bit word is coded using a prefix plus data:

  Prefix  Pattern encoded                         Data size
  000     Zero run                                3 bits (for runs up to 8 "0")
  001     4-bit sign-extended                     4 bits
  010     One byte sign-extended                  8 bits
  011     Halfword sign-extended                  16 bits
  100     Halfword padded with a zero halfword    16 bits
  101     Two halfwords, each a byte sign-ext.    16 bits
  110     Word consisting of repeated bytes       8 bits
  111     Uncompressed                            32 bits

A simplified variant with 2-bit prefixes (the zero-word pattern can be extended to a zero run of up to 8 words, coded in 3 bits):

  Prefix  Pattern encoded                         Data size
  00      Zero word                               0
  01      One byte sign-extended                  8 bits
  10      Halfword sign-extended                  16 bits
  11      Uncompressed                            32 bits
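
As a sketch of how the 3-bit-prefix table maps to logic (our illustration, not the authors' encoder; the aggregation of consecutive zero words into one run is omitted), each word can be classified by testing the patterns in table order:

```c
#include <stdint.h>

/* Returns the 3-bit FPC prefix for word w and sets *data_bits to the
 * size of the payload that would be emitted after the prefix. */
int fpc_classify(uint32_t w, int *data_bits) {
    int32_t s  = (int32_t)w;
    int16_t lo = (int16_t)(w & 0xffffu);
    int16_t hi = (int16_t)(w >> 16);
    uint8_t b0 = (uint8_t)w;

    if (w == 0)                    { *data_bits = 3;  return 0; } /* 000: zero run */
    if (s >= -8 && s <= 7)         { *data_bits = 4;  return 1; } /* 001: 4-bit sign-extended */
    if (s >= -128 && s <= 127)     { *data_bits = 8;  return 2; } /* 010: one byte sign-extended */
    if (s >= -32768 && s <= 32767) { *data_bits = 16; return 3; } /* 011: halfword sign-extended */
    if ((w & 0xffffu) == 0)        { *data_bits = 16; return 4; } /* 100: halfword padded with a
                                                                          zero halfword */
    if (lo >= -128 && lo <= 127 &&
        hi >= -128 && hi <= 127)   { *data_bits = 16; return 5; } /* 101: two halfwords, each a
                                                                          byte sign-extended */
    if (b0 == (uint8_t)(w >> 8)  &&
        b0 == (uint8_t)(w >> 16) &&
        b0 == (uint8_t)(w >> 24))  { *data_bits = 8;  return 6; } /* 110: word of repeated bytes */
    *data_bits = 32; return 7;                                    /* 111: uncompressed */
}
```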