Difference Engine: Harnessing Memory Redundancy in Virtual Machines
by Diwaker Gupta et al.
Presented by Jonathan Berkhahn
Motivation
Virtualization has improved and spread over the past decade
Servers often run at 5-10% of CPU capacity:
  o High capacity needed for peak workloads
  o Fault isolation for certain services
  o Certain services run best on particular configurations
Solution: virtual machines
Problem
CPUs are well suited to multiplexing; main memory is not
Upgrading memory is not an ideal option:
  o Expensive
  o Limited by slots on the motherboard
  o Limited by the ability to support higher-capacity modules
  o Consumes significant power, and therefore produces significant heat
Further exacerbated by the current trend toward many-core systems
How do we fix this memory bottleneck for virtual machines?
Difference Engine
Implemented as an extension to the Xen VMM:
  o Sub-page-granularity page sharing
  o In-memory page compression
Reduces the memory footprint by up to 90% for homogeneous workloads and up to 65% for heterogeneous workloads
Outline
Related Work
Difference Engine algorithms
Implementation
Evaluation
Page Sharing
Transparent page sharing
  o Requires guest OS modification
Content-based sharing
  o VMware ESX
Delta Encoding
Manber
  o Rabin fingerprints
  o Inefficient
Broder
  o Combined Rabin fingerprints and sampling
Both focused on identifying similar files, not on encoding the differences
Memory Compression
Douglis et al.
  o Sprite OS
  o A double-edged sword: results were mixed
Wilson et al.
  o Attributed the earlier mixed results to slow hardware
  o Developed algorithms that exploit virtual memory structure
Page Sharing
Content-based sharing:
  o Hash pages and index them by hash value
  o A hash collision indicates a potential match
  o Compare byte-by-byte to ensure pages are identical
  o Reclaim one page and update the virtual memory mappings
  o Subsequent writes cause a page fault, trapped by the VMM
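The content-based sharing steps above can be sketched as follows. This is a minimal illustration in Python, not the Xen implementation: pages are plain byte strings, and the SHA-1 index stands in for whatever hash the VMM actually uses.

```python
import hashlib

def share_identical_pages(pages):
    """Deduplicate byte-identical pages.

    `pages` maps a page number to its contents (bytes). Returns a dict
    mapping each page number to the canonical page it shares with
    (itself if it is the reference copy).
    """
    index = {}   # content hash -> canonical page number
    shared = {}  # page number  -> canonical page number
    for pfn, data in sorted(pages.items()):
        h = hashlib.sha1(data).digest()
        canon = index.get(h)
        # A matching hash only *suggests* a match; confirm byte-by-byte
        # before reclaiming the duplicate copy.
        if canon is not None and pages[canon] == data:
            shared[pfn] = canon  # reclaim; real VMM would map it copy-on-write
        else:
            index[h] = pfn
            shared[pfn] = pfn
    return shared
```

In the real system the reclaimed page is mapped read-only, so the page fault on a later write lets the VMM give the guest a private copy again.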
Patching
Sharing of similar (not just identical) pages:
  o Identify similar pages and store the differences as a "patch"
  o Compresses multiple pages down to a single reference copy plus a collection of patches
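To make the patching idea concrete, here is a toy run-of-differences encoder in Python. The paper uses an xdelta-style algorithm; this simplified stand-in only records contiguous byte ranges that differ from the reference page, which is enough to show why similar pages compress to a small patch.

```python
def make_patch(ref, page):
    """Encode `page` as a list of (offset, bytes) runs that differ from
    the reference page `ref`. Toy scheme, not the paper's xdelta."""
    assert len(ref) == len(page)
    patch, i = [], 0
    while i < len(page):
        if page[i] != ref[i]:
            j = i
            while j < len(page) and page[j] != ref[j]:
                j += 1
            patch.append((i, page[i:j]))  # one run of differing bytes
            i = j
        else:
            i += 1
    return patch

def apply_patch(ref, patch):
    """Rebuild the original page from the reference copy and its patch."""
    out = bytearray(ref)
    for off, data in patch:
        out[off:off + len(data)] = data
    return bytes(out)
```

A page that differs from its reference in a few spots yields a patch of a few bytes, so many similar pages can share one reference copy.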
Identifying Candidate Pages
Compression
Compression of live pages in main memory
  o Worthwhile only for pages with high compression ratios
The VMM traps accesses to compressed pages and decompresses them on demand
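The "only for high compression ratios" policy can be sketched as a simple gate: compress the page, and keep the compressed form only if it saves enough space. This Python sketch uses zlib and an assumed 2:1 threshold; the actual algorithm and threshold in Difference Engine differ.

```python
import zlib

MIN_RATIO = 2.0  # assumed threshold: keep only if we save at least half the page

def try_compress(page):
    """Return the compressed page if compression is worthwhile, else None
    (meaning: leave the page uncompressed in memory)."""
    comp = zlib.compress(page, 6)
    if len(page) / len(comp) >= MIN_RATIO:
        return comp
    return None
```

On an access fault the VMM would run `zlib.decompress` on the stored bytes and reinstall the uncompressed page.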
Overview
Paging Machine Memory
Last resort: copy pages out to disk
  o An extremely expensive operation
  o Policy decisions are left to the end user
Caveat
Both patching and compression are only useful for infrequently accessed pages. So, how do we determine "infrequent"?
Clock
A not-recently-used (NRU) policy
Checks whether each page has been referenced or modified, and classifies it:
  o C1 - Recently modified
  o C2 - Recently referenced
  o C3 - Not recently accessed
  o C4 - Not accessed for a while
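One way to model the C1-C4 ladder is as a small state machine driven by each Clock sweep: a touched page is promoted based on its Modified/Referenced bits, and an untouched page decays one step toward C4. This is an assumed simplification of the paper's policy, sketched in Python.

```python
def next_state(state, referenced, modified):
    """Advance one page through the C1-C4 ladder on a Clock sweep.

    `referenced`/`modified` are the page's R/M bits since the last sweep
    (the bits are cleared after each sweep). Decay rule is an assumption.
    """
    if modified:
        return "C1"  # recently modified
    if referenced:
        return "C2"  # recently referenced (read but not written)
    # Untouched since the last sweep: decay one step toward C4.
    return {"C1": "C3", "C2": "C3", "C3": "C4", "C4": "C4"}[state]
```

Pages that reach C4 have gone several sweeps without any access, which is what makes them candidates for patching, compression, or paging out.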
Implementation
Modification to the Xen VMM
  o Roughly 14,500 lines of code, plus 20,000 for ports of existing patching and compression algorithms
Shadow page tables
  o Difference Engine relies on modifying the shadow page tables and the P2M table
  o Pages mapped by Dom-0 are ignored
Complications: real mode and I/O support
Complications
Real mode: booting on bare metal disables paging
  o Requires paging to be enabled within the guest OS
I/O
  o The Xen hypervisor emulates I/O hardware with a Dom-0 process, ioemu, which directly accesses guest pages
  o This conflicts with the policy of not acting on Dom-0 pages
  o Workaround: ioemu's mappings of VM pages are unmapped every 10 seconds
Clock
The NRU policy is driven by the Referenced and Modified bits on each page
  o Xen's shadow page tables were modified to set these bits when creating mappings
  o Pages are classified into states C1-C4 accordingly
Page Sharing
Hash table kept in the Xen heap
  o Memory limitation: only 12 MB available
  o The hash table holds entries for only 1/5 of memory at a time: a 1.76 MB table
  o All of memory is covered in 5 passes
Detecting Similar Pages
Hash Similarity Detector (2,1)
  o The hash similarity table is cleared after all pages have been considered
A lock is needed only while building the patch and replacing the page
  o A concurrent write may result in a differently sized patch, but the result is still correct
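The general shape of a similarity detector can be sketched as follows. This is a hypothetical simplification, not the paper's exact HashSimilarityDetector(2,1): it hashes one small block at each of two fixed offsets, and a page whose block hash matches an indexed page becomes a patching candidate against it. The 64-byte block size and offset choice are assumptions for illustration.

```python
import hashlib

BLOCK = 64  # assumed block size for the similarity hash

def similarity_keys(page, k=2):
    """Hash one BLOCK-sized chunk at each of k evenly spaced offsets."""
    n = len(page)
    keys = []
    for i in range(k):
        off = (i * n // k) % max(n - BLOCK, 1)
        keys.append(hashlib.sha1(page[off:off + BLOCK]).digest())
    return keys

def find_candidate(table, page):
    """Return an indexed page similar to `page`, or None after indexing it.

    Matching on any one key marks the pages as patching candidates;
    a full byte-level patch is built (and sized) afterwards.
    """
    for key in similarity_keys(page):
        if key in table:
            return table[key]
    for key in similarity_keys(page):
        table.setdefault(key, page)
    return None
```

Two pages that share even one sampled block hash to the same key, so near-duplicates are found without comparing whole pages.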
Compression & Disk Paging
Antagonistic relationship with patching
  o Compressed or paged-out pages cannot be patched
Both are delayed until all pages have been checked for similarity and the page has not been accessed for a while (state C4)
Disk paging is handled by a daemon running in Dom-0
Disk Paging
Evaluation
Experiments were run on a dual-processor, dual-core 2.33 GHz Intel Xeon with a 4 KB page size
Each operation was tested individually for overhead
Page Lifetime
Homogeneous VMs
Homogeneous Workload
Heterogeneous Workload
Heterogeneous Workload 2
Utilizing Savings
Conclusion
Main memory is a primary bottleneck for VMs
Significant memory savings can be achieved by:
  o Sharing identical pages
  o Patching similar pages
  o In-memory page compression
Difference Engine was implemented and shown to save as much as 90% of memory
The saved memory can be used to run more VMs
Discussion