1
Efficient Resource Provisioning in Compute Clouds via VM Multiplexing
Xiaoqiao Meng, Canturk Isci, Jeffrey Kephart, Li Zhang, Eric Bouillet, Dimitrios Pendarakis IBM T.J. Watson Research Center
2
Resource provisioning in cloud
Cloud creates an illusion of “infinite” pool of resources Cloud manages resources via creating and consolidating Virtual Machines Cloud resource provisioning: decision on VM size and VM placement Efficiency of resource provisioning measured by overall resource utilization Virtual machine pools Servers Network equipments Storage
3
VM sizing
- Static: fixed size upon VM creation
  - Easy to manage
  - Lacks elasticity in resource reuse
- Dynamic: size adapts to demand
  - High resource utilization
  - Avoiding over- and under-provisioning is challenging
(Figure: CPU utilization of one VM instance over 4 months; using peak load as the VM size keeps the chance of capacity violation below 1%)
4
Traditional VM placement
- Placement is done separately from sizing
- VM size is fixed regardless of where it is placed
- Target: minimize required physical servers or reduce energy cost
- Formulated as a multi-dimensional vector packing problem
(Figure: fixed-size VMs, e.g. VM 1: 2 cores / 8 GB, VM 2: 2 cores / 6 GB, VM 3: 4 cores / 12 GB, VM 4: 2 cores / 4 GB, mapped through virtualization onto physical hosts with capacities such as 4 cores / 12 GB and 8 cores / 24 GB)
5
Outline
- VM multiplexing
- Related work
- Design
  - Performance constraint
  - VM selection
  - Joint-VM sizing
- Applications
- Summary
6
VM workload multiplexing
- Multiplex VMs' workloads on the same physical server
- Aggregate the workloads and estimate the total capacity need from the aggregate
- Preserve the performance level of each VM
(Figure: separate sizing yields sizes s1 and s2; multiplexed sizing yields s3. We expect s3 < s1 + s2: the benefit of multiplexing)
7
Example for VM multiplexing
- Traditional method: provision the three VMs separately; total capacity need = 1.04 cpu
- Alternative method: consolidate the three VMs and provision them as a whole; total capacity need = 0.67 cpu
- Here "cpu" refers to the virtual CPU being allocated; depending on the virtualization implementation (e.g., the POWER hypervisor on System p), CPU allocation can be fractional
- The gain comes from statistical multiplexing
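The saving above can be illustrated with a small sketch (not the paper's exact method): size each VM at a high percentile of its demand, then size the multiplexed group from the aggregate demand. The `size_for` helper and the synthetic traces are illustrative assumptions.

```python
# Sketch: percentile-based sizing of individual vs. multiplexed VMs.
# Anti-correlated workloads make the joint size smaller than the sum
# of individual sizes.
import math

def size_for(demand, beta=0.01):
    """Smallest capacity whose violation fraction is <= beta."""
    ordered = sorted(demand)
    idx = math.ceil((1.0 - beta) * len(ordered)) - 1
    return ordered[max(idx, 0)]

# Two synthetic, complementary CPU traces: peaks of one align with
# troughs of the other.
vm1 = [0.2 + 0.5 * (t % 10 < 5) for t in range(1000)]
vm2 = [0.2 + 0.5 * (t % 10 >= 5) for t in range(1000)]

s1, s2 = size_for(vm1), size_for(vm2)
s3 = size_for([a + b for a, b in zip(vm1, vm2)])
print(s1 + s2, s3)  # the joint size s3 is smaller than s1 + s2
```

With perfectly complementary traces the aggregate is flat, so the joint size is far below the sum of the individual sizes, mirroring the 1.04 cpu vs. 0.67 cpu example on the slide.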
8
Potential Improvement by VM multiplexing
- Tested on servers of a global hosting service: 1,325 physical hosts and their VMs
- Based on 3 months of CPU/memory utilization data
- Average capacity saving per physical server: 39%
9
Related work
- Commercial capacity-planning tools consider each VM's needs in isolation: IBM WebSphere CloudBurst, VMware Capacity Planner, Novell PlateSpin Recon, Lanamark Suite
- Some prior work considers inter-VM interaction and compatibility to improve resource provisioning:
  - HotCloud '09: co-place VMs that do not contend for the same resource type
  - OSDI '08, HotCloud '09, VEE '09: co-place VMs with memory sharing
  - MASCOTS '09, EuroSys '09: statistical multiplexing of VMs to save power
  - NOMS '08: VM consolidation formulated as an optimization problem
10
Design of enabling VM multiplexing
Three components enable VM multiplexing:
- Performance constraint: describe application performance as a function of VM capacity
- VM selection: construct a super-VM by finding VM combinations with complementary workloads
- Joint-VM sizing: determine the total capacity need of the super-VM
The super-VM is then placed on a host by VM consolidation; standard VM placement techniques plug in unchanged.
(Figure: example of complementary workloads combined into super-VMs and jointly sized)
11
VM performance constraint
- Performance constraint on VM size c: over a time window of T intervals,
  (number of intervals with a capacity violation) / (total number of intervals) ≤ β
- This constraint is the basis for computing the VM size
- Parameters β and T adapt to the application type:
  - small T and β for time-sensitive, critical applications
  - large T and β for time-elastic, long-running background jobs
  - T = 0, β = 0 corresponds to peak-load-based sizing
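The (T, β) constraint above can be sketched in a few lines; the helper names and the toy trace are illustrative assumptions, with the window T taken as the length of the demand trace.

```python
def satisfies_constraint(demand, c, beta):
    """True if the fraction of intervals where demand exceeds
    capacity c is no more than beta."""
    violations = sum(1 for d in demand if d > c)
    return violations / len(demand) <= beta

def min_size(demand, beta):
    """Smallest observed demand level that satisfies the constraint;
    beta = 0 reduces to peak-load sizing."""
    for c in sorted(set(demand)):
        if satisfies_constraint(demand, c, beta):
            return c

trace = [0.3] * 95 + [0.9] * 5           # 5% of intervals at the peak
assert min_size(trace, beta=0.0) == 0.9  # peak-load sizing
assert min_size(trace, beta=0.05) == 0.3 # tolerating 5% violations
```

The example shows how relaxing β shrinks the provisioned size: a small tolerated violation fraction lets the sizer ignore the rare peak.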
12
Performance constraint with VM multiplexing
- Given n VMs with individual constraints (T_i, β_i), choose the most stringent: T = min_i T_i, β = min_i β_i
- If the super-VM size satisfies (T, β), each VM's individual constraint (T_i, β_i) is satisfied as well
- Important feature: knowing each VM's performance constraint, the total capacity need of the super-VM is easily derived without decreasing any individual VM's performance
13
Construct super-VM
- Key for multiplexing: complementary workload patterns, approximately measured by the correlation coefficient
  - Strong negative correlation: complementary workloads; peaks in one VM's workload coincide with troughs in the other's, and vice versa
  - Strong positive correlation: not-so-complementary workloads
- Solutions:
  - Greedy search: recursively find the VM pairs with the strongest negative correlation; finds multiplexing sets of size 2
  - Clustering: pairwise distance measured by the correlation coefficient; extends the greedy method to multiplexing sets larger than 2
14
Greedy search Three-step process
1. Find the VM pair (i, j) with the most negative correlation coefficient; output (i, j) as a candidate for joint provisioning
2. Remove VMs i and j
3. Repeat steps 1 and 2 until all input VMs are removed
(Figure: correlation coefficient matrix)
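The three steps above can be sketched as follows; the Pearson-correlation helper and the four toy workloads are illustrative assumptions, not the paper's code.

```python
# Greedy pairing sketch: repeatedly pick the VM pair with the most
# negative workload correlation and set it aside for joint
# provisioning.
from statistics import mean

def corr(x, y):
    """Pearson correlation coefficient of two equal-length traces."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def greedy_pairs(workloads):
    remaining = set(workloads)
    pairs = []
    while len(remaining) >= 2:
        # Step 1: most negative correlation among remaining VMs.
        best = min(
            ((i, j) for i in remaining for j in remaining if i < j),
            key=lambda p: corr(workloads[p[0]], workloads[p[1]]),
        )
        pairs.append(best)
        remaining -= set(best)  # Step 2: remove the pair; step 3: loop
    return pairs

workloads = {
    "a": [1, 2, 3, 4],   # rising
    "b": [4, 3, 2, 1],   # falling: complementary to "a"
    "c": [1, 2, 3, 5],   # rising, similar to "a"
    "d": [5, 3, 2, 1],   # falling, similar to "b"
}
print(greedy_pairs(workloads))  # pairs the anti-correlated VMs first
```

Here "a" and "b" are perfectly anti-correlated, so they are paired first, leaving "c" and "d" as the second candidate pair.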
15
Super-VM sizing
1. Aggregation
2. Workload forecasting
3. Determine the total size from the super-VM performance constraint
16
Workload forecasting
- Decompose each VM's workload into fluctuating and regular components
- Forecasting is limited to the aggregated fluctuating workload
- Standard time-series forecasting methods are applied; choose the one with the smallest forecasting error on historical data
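One way to sketch the decomposition (the slide does not fix a method, so the periodic-mean approach and the helper names here are assumptions): take the per-phase mean across periods as the regular component and keep the residual as the fluctuating component that would be fed to the forecaster.

```python
# Hypothetical sketch: split a workload with known period p into a
# regular component (the per-phase mean) and a fluctuating residual.
from statistics import mean

def decompose(workload, period):
    regular = [mean(workload[i::period]) for i in range(period)]
    fluct = [w - regular[i % period] for i, w in enumerate(workload)]
    return regular, fluct

trace = [1, 5, 1, 5, 1, 7]          # period-2 pattern plus one spike
regular, fluct = decompose(trace, period=2)
print(regular)   # per-phase means: the regular component
print(fluct)     # residual: the fluctuating component to forecast
```

The residual sums to zero by construction, so only the irregular spikes remain for the forecaster; the regular component is replayed directly.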
17
Modeling forecast error
- Model the forecasting error to compensate for workload variability; applicable to any forecasting method
- With an explicit error model: compute the error's statistical distribution by training the error model on historical data
- Without an explicit error model: use kernel density estimation (KDE) to estimate the error distribution from historical data
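The KDE idea can be sketched with Gaussian kernels over historical errors; the bandwidth, the quantile choice, and the tiny error sample below are illustrative assumptions, not values from the paper.

```python
# Sketch: estimate the forecast-error distribution with a Gaussian
# KDE, then take a high quantile as headroom on top of the forecast.
import math

def kde_pdf(errors, bandwidth):
    """Density estimate: a Gaussian kernel on each historical error."""
    def pdf(x):
        return sum(
            math.exp(-((x - e) / bandwidth) ** 2 / 2) for e in errors
        ) / (len(errors) * bandwidth * math.sqrt(2 * math.pi))
    return pdf

def quantile(errors, bandwidth, q, lo=-10.0, hi=10.0, steps=10000):
    """Numerically integrate the KDE to find the q-quantile."""
    pdf = kde_pdf(errors, bandwidth)
    dx = (hi - lo) / steps
    acc = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * dx
        acc += pdf(x) * dx
        if acc >= q:
            return x
    return hi

hist_errors = [-0.1, 0.0, 0.0, 0.1, 0.2]   # past forecast errors
headroom = quantile(hist_errors, bandwidth=0.05, q=0.95)
# sizing at forecast + headroom covers ~95% of the error mass
```

Adding such a quantile of the error distribution to the forecast is one way the error model can compensate for workload variability without an explicit parametric form.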
18
Application to VM consolidation
- Tested on performance data from four cloud hosting environments (A-D); first-fit-decreasing (FFD) used for VM consolidation
- Performance gain: with VM multiplexing, 28%-62% fewer physical hosts are required than without multiplexing

Cloud host                              A        B (CPU)  B (mem)  C        D (CPU)  D (mem)
(hosts, VMs)                            (267, 2020)  (234, 2253)   (20, 185)    (106, 933)
FFD + no multiplexing: required hosts   54       51       52       7        26       29
  Host saving (w.r.t. reality)          79.8%    78.2%    77.8%    65.0%    75.5%    72.6%
FFD + VM multiplexing: required hosts   28       24       21       5        12       11
  Host saving (w.r.t. reality)          89.5%    89.7%    91.0%    75.0%    88.7%    89.6%
  Host saving (w.r.t. no multiplexing)  48.1%    52.9%    59.6%    28.6%    53.8%    62.1%
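The placement routine plugged in here can be sketched for a single CPU dimension (real consolidation is multi-dimensional; the sizes below are illustrative, not the study's data).

```python
# Sketch of first-fit-decreasing (FFD) bin packing for VM placement.
def first_fit_decreasing(vm_sizes, host_capacity):
    """Return the number of hosts FFD opens for the given VM sizes."""
    hosts = []  # remaining free capacity per opened host
    for size in sorted(vm_sizes, reverse=True):
        for i, free in enumerate(hosts):
            if free >= size:
                hosts[i] = free - size  # fits on an existing host
                break
        else:
            hosts.append(host_capacity - size)  # open a new host
    return len(hosts)

# Multiplexing complementary VMs shrinks their joint size, so FFD
# needs fewer hosts: four 0.6-cpu VMs vs. two 0.9-cpu super-VMs.
assert first_fit_decreasing([0.6, 0.6, 0.6, 0.6], 1.0) == 4
assert first_fit_decreasing([0.9, 0.9], 1.0) == 2
```

The pair of asserts mirrors the table's effect in miniature: the same demand packed as jointly sized super-VMs occupies fewer physical hosts.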
19
Consolidation with various perf. constraint
- Assume every VM has the same performance constraint
- Observations:
  - Maximum extra capacity saving is 45%, for T = 0, β = 0 (corresponding to peak-load-based sizing)
  - The saving shrinks as β increases: workload dynamics then make less difference in terms of VM capacity violations
  - With forecasting, VM consolidation is slightly more aggressive, leading to a higher actual β than expected on certain physical hosts
  - Overall, 88%-96% of hosts have an actual β no higher than expected
20
Application to providing VM resource guarantees
- Apply VM multiplexing to providing resource guarantees
- Define "joint reservations" instead of individual per-VM reservations
- Enforce joint reservations with the "resource pool" abstraction
21
Application to providing VM resource guarantees
- 16%-75% more VMs admitted by enabling joint reservations
- The gain depends on the performance constraint: higher gain for more stringent constraints
22
Summary
- Multiplex VMs with complementary workload patterns to save overall resources
- Three design components: performance constraint, VM selection, joint-VM size estimation
- Enormous capacity savings in the applications
23
Thank you! Q & A
24
Modeling forecast error
(Backup figure: workload over time with VM size c)