CRAMM: Virtual Memory Support for Garbage-Collected Applications

Ting Yang, Emery Berger, Scott Kaplan†, Eliot Moss
Department of Computer Science, University of Massachusetts Amherst
†Dept. of Math and Computer Science, Amherst College

Motivation: Heap Size Matters

GC languages (Java, C#, Python, Ruby, etc.) are increasingly popular.
Heap size is critical:
- Too large: paging (10-100x slower)
- Too small: an excessive number of collections hurts throughput
[Diagram: with 100MB of memory, a 120MB heap forces the JVM to page to disk through the VM/OS, while a 60MB heap fits entirely in memory.]

What is the right heap size?

Find the sweet spot:
- Large enough to minimize collections
- Small enough to avoid paging
BUT: the sweet spot changes constantly (multiprogramming).

CRAMM: Cooperative Robust Automatic Memory Management
Goal: through cooperation between the OS and the GC, keep garbage-collected applications running at their sweet spot.

CRAMM Overview

Cooperative approach:
- Collector-neutral heap sizing model (GC), suitable for a broad range of collectors
- Statistics-gathering virtual memory manager (OS)
Automatically resizes the heap in response to memory pressure:
- Grows to maximize space utilization
- Shrinks to eliminate paging
Improves performance by up to 20x.
Overhead on non-GC applications: 1-2.5%.

Outline

- Motivation
- CRAMM overview
- Automatic heap sizing
- Statistics gathering
- Experimental results
- Conclusion

GC: How do we choose a good heap size?

GC: Collector-neutral model

The model relates working set size (WSS) to heap size linearly:

    WSS ≈ a × heapSize + b

SemiSpace (copying): a ≈ ½, b ≈ JVM + code + app's live size
- a (heapUtilFactor): a constant dependent on the GC algorithm
- b (fixed overhead): libraries, code, and copy space for the app's live data

GC: A collector-neutral WSS model

    WSS ≈ a × heapSize + b

- SemiSpace (copying): a ≈ ½, b ≈ JVM + code + app's live size
- MS (non-copying): a ≈ 1, b ≈ JVM + code
a (heapUtilFactor): a constant dependent on the GC algorithm
b (fixed overhead): libraries, code, and copy space for the app's live data

GC: Selecting a new heap size

Inputs:
- From the GC: heapUtilFactor (a) and cur_heapSize
- From the VMM: WSS and available memory
Change the heap size so that the working set just fits in the currently available memory.
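The selection rule above can be sketched in a few lines (a hypothetical helper assuming the linear model WSS ≈ a × heapSize + b from the preceding slides; all names are illustrative):

```python
def select_heap_size(cur_heap_size, heap_util_factor, wss, available):
    """Pick a heap size whose working set just fits in available memory.

    Under the linear model WSS ~= a * heapSize + b, growing the heap by
    delta grows the working set by roughly a * delta, so the slack
    (available - WSS) translates into a heap adjustment of slack / a.
    """
    slack = available - wss  # positive: room to grow; negative: paging risk
    return cur_heap_size + slack / heap_util_factor

# Example: SemiSpace-like collector (a = 0.5), 60MB heap, measured
# WSS of 80MB, 100MB available: 20MB of slack allows 40MB of heap growth.
```

Note that because a < 1 for copying collectors, a small amount of paging risk (negative slack) forces a proportionally larger heap shrink, which is exactly the aggressive shrinking needed to escape paging quickly.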

Heap Size vs. Execution Time, WSS

VM: How do we collect the information that heap size selection needs (WSS and available memory) with low overhead?

Calculating WSS with respect to a 5% fault threshold

[Animation: a memory reference sequence is replayed against an LRU queue of pages. Each hit at LRU position i increments the hit histogram at position i; misses count as faults. Summing the histogram yields the LRU fault curve, from which the WSS at the 5% fault threshold is read off.]
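The histogram idea can be illustrated with a small user-level sketch (illustrative only; CRAMM maintains the histogram inside the virtual memory manager rather than by replaying reference traces):

```python
from collections import OrderedDict

def hit_histogram(refs):
    """hits[d] = number of references that hit at LRU depth d
    (0 = most recently used); also returns the count of cold misses."""
    lru = OrderedDict()  # keys in LRU order; last key = most recent
    hits, cold = {}, 0
    for page in refs:
        if page in lru:
            keys = list(lru)
            depth = len(keys) - 1 - keys.index(page)
            hits[depth] = hits.get(depth, 0) + 1
            lru.move_to_end(page)
        else:
            cold += 1
            lru[page] = True
    return hits, cold

def wss_at_threshold(refs, threshold=0.05):
    """Smallest memory size (in pages) keeping the fault rate <= threshold.

    A memory of m pages holds the m most recently used pages, so a
    reference hitting at depth d faults exactly when d >= m.
    """
    hits, cold = hit_histogram(refs)
    total = len(refs)
    max_depth = max(hits) if hits else 0
    for m in range(1, max_depth + 2):
        faults = cold + sum(c for d, c in hits.items() if d >= m)
        if faults / total <= threshold:
            return m
    return max_depth + 1
```

For example, cycling through three pages repeatedly makes every warm reference hit at LRU depth 2, so any memory smaller than three pages faults on every reference, while three pages suffice to stay under the 5% threshold.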

WSS: hit histogram

Not possible in a standard VM:
- Global LRU queue
- No per-process/per-file information or control
- Hence difficult to estimate WSS and available memory
CRAMM VM:
- Per-process/per-file page management
- Page lists: Active, Inactive, Evicted
- Adds and maintains the histogram

WSS: managing pages per process

Each process's pages move between three lists:
- Active (CLOCK): pages mapped with full access permissions
- Inactive (LRU): pages protected by turning off permissions; touching one raises a minor fault
- Evicted (LRU): pages evicted to disk; touching one raises a major fault
Faults trigger refill and adjustment: the faulting page returns to the Active list and the histogram is updated.
[Diagram: per-page headers, page descriptors, and AVL nodes link the lists to the histogram and fault counts.]
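A toy model of these list transitions and the histogram bookkeeping might look as follows (a simplification with invented names; the real CRAMM VM does this in the kernel, runs CLOCK over the Active list, and uses AVL nodes to avoid the linear scans shown here):

```python
class PageLists:
    """Toy model of CRAMM-style per-process page lists (illustrative only)."""

    def __init__(self):
        self.active = set()    # pages with access permissions (CLOCK in the real VM)
        self.inactive = []     # LRU order: index 0 = most recently deactivated
        self.evicted = []      # LRU order of on-disk pages
        self.histogram = {}    # LRU position -> hit count

    def _record(self, position):
        self.histogram[position] = self.histogram.get(position, 0) + 1

    def touch(self, page):
        """Handle a reference: hit, minor fault, major fault, or cold miss."""
        if page in self.active:
            return "hit"
        if page in self.inactive:                 # minor fault: protection trap
            pos = len(self.active) + self.inactive.index(page)
            self._record(pos)
            self.inactive.remove(page)
            self.active.add(page)
            return "minor"
        if page in self.evicted:                  # major fault: read from disk
            pos = (len(self.active) + len(self.inactive)
                   + self.evicted.index(page))
            self._record(pos)
            self.evicted.remove(page)
            self.active.add(page)
            return "major"
        self.active.add(page)                     # first touch: no histogram entry
        return "cold"
```

The key point the sketch captures is that a fault's histogram position is its depth in the combined Active/Inactive/Evicted ordering, which is what lets the VM build the LRU fault curve without a single global queue.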

WSS: controlling overhead

Minor faults on the Inactive list are the cost of gathering the histogram. CRAMM therefore adjusts the Active/Inactive boundary, keeping a buffer of recently protected pages, so that this bookkeeping stays at about 1% of execution time.
[Diagram: same list structure as the previous slide, with a buffer between the Active (CLOCK) and Inactive (LRU) lists and the control boundary marked.]

Calculating available memory

What counts as "available" (not merely "free")?
- Page cache policy: are pages from closed files "free"?
- Yes: easy to distinguish in CRAMM, since they sit on a separate list.

Available memory = all resident application pages + free pages in the system + pages from closed files
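The definition above is simple enough to state directly in code (parameter names are illustrative; all quantities are page counts):

```python
def available_memory(resident_app_pages, free_pages, closed_file_pages):
    """Memory 'available' to the application under CRAMM's definition:
    pages it already holds, plus free pages, plus page-cache pages
    belonging to closed files (reclaimable without hurting anyone)."""
    return resident_app_pages + free_pages + closed_file_pages
```
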

Experimental Results

Experimental Evaluation

Experimental setup:
- CRAMM (Jikes RVM + Linux) vs. unmodified Jikes RVM, JRockit, HotSpot
- GCs: GenCopy, CopyMS, MS, SemiSpace, GenMS
- Benchmarks: SPECjvm98, DaCapo, SPECjbb, ipsixql + SPEC2000
Experiments:
- Overhead without memory pressure
- Dynamic memory pressure

CRAMM VM: Efficiency

Overhead: on average, 1%-2.5%

Dynamic Memory Pressure (1)

[Graph: transactions finished (thousands) vs. elapsed time (seconds) for GenMS running a modified SPECjbb with 160MB of memory and an initial heap size of 120MB. Curves compare the stock JVM without pressure, CRAMM under pressure (1613 major faults, 98% CPU), and the stock JVM under pressure (48% CPU), which lags far behind.]

Dynamic Memory Pressure (2)

Conclusion

CRAMM: Cooperative Robust Automatic Memory Management
- GC: collector-neutral WSS model
- VM: statistics-gathering virtual memory manager
Dynamically chooses a nearly optimal heap size for GC applications:
- Maximizes use of memory without paging
- Minimal overhead (1%-2.5%)
- Quickly adapts to changes in memory pressure
