Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment
Zhen Xiao, Weijia Song, and Qi Chen
Dept. of Computer Science, Peking University
http://zhenxiao.com/
ChinaSys 2012 Talk
Introduction
Background: Cloud computing allows business customers to scale their resource usage up and down based on need. It uses virtualization technology to multiplex hardware resources among a large group of users.
Problem: How can a cloud service provider best multiplex virtual resources onto physical resources?
Models of Cloud Computing Services
Many cloud computing services are offered in the market today:
Infrastructure-as-a-Service (IaaS): Amazon EC2, Grand Cloud, Aliyun
Platform-as-a-Service (PaaS): Google App Engine, Windows Azure, Sina App Engine, Baidu App
Software-as-a-Service (SaaS): Salesforce, Google Apps (Calendar, Talk, Docs, Gmail)
Amazon EC2-Style Service
[Diagram: users run virtual machines that a virtualization layer multiplexes onto physical servers; this is the IaaS service model.]
Amazon EC2-Style Service
[Example mapping of VMs onto PMs (PM1-PM3):
VM1 cpu:0.1 net:0.1 mem:0.61
VM2 cpu:0.1 net:0.1 mem:0.3
VM3 cpu:0.6 net:0.6 mem:0.05
VM4 cpu:0.3 net:0.3 mem:0.2]
Goals and Objectives
Overload avoidance: the capacity of a PM should be sufficient to satisfy the resource needs of all VMs running on it. Otherwise the PM is overloaded, which can degrade the performance of its VMs.
Green computing: the number of PMs in use should be minimized as long as they can still satisfy the needs of all VMs. Idle PMs can be put to sleep to save energy.
Overview of the Rest of the Talk
System overview
Details of the algorithm
Simulation results
Experiment results
Conclusion
System Overview
[Architecture diagram: each physical server runs the Xen hypervisor, with an Usher local node manager (LNM), a WS prober, and an MM adjustor in Dom 0 and guest VMs in Dom U. A central Usher controller (Usher CTRL) runs the VM scheduler plugin, which combines a predictor, a hotspot solver, and a coldspot solver to produce the migration list.]
Local Resource Adjustment
[Diagram: on each server, the MM adjustor compares the memory allocated to each VM (VM1-VM4) against the VM's current working set (WS) reported by the WS prober and adjusts the allocations accordingly.]
System Overview
How do we collect the resource usage statistics for each VM?
CPU and network usage can be calculated by monitoring the scheduling events in Xen (XenStat).
Memory usage is estimated by a working set prober (WS Prober) on each hypervisor, which estimates the working set sizes of the VMs. (We use the random page sampling technique as in VMware ESX Server.)
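The slide only names the sampling technique; below is a minimal sketch of the idea, where the hypervisor hooks invalidate_mapping and was_touched are hypothetical stand-ins for the real page-tracking interface.

import random

def estimate_working_set(pages, invalidate_mapping, was_touched, sample_size=100):
    # Random page sampling, roughly as in VMware ESX Server: invalidate
    # a random sample of guest pages, let the VM run for one sampling
    # interval, then count how many sampled pages were re-accessed.
    # invalidate_mapping and was_touched are hypothetical hooks.
    sample = random.sample(pages, min(sample_size, len(pages)))
    if not sample:
        return 0.0
    for page in sample:
        invalidate_mapping(page)   # the next access to this page will trap
    # ... the VM runs for one sampling interval here ...
    touched = sum(1 for page in sample if was_touched(page))
    return touched / len(sample)   # estimated fraction of memory in active use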
System Overview
How do we predict the future resource demands of VMs and the future load of PMs?
VMware ESX Server uses an EWMA (exponentially weighted moving average).
Our system uses FUSD (Fast Up and Slow Down) prediction, sketched below.
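A minimal sketch of the FUSD idea: an EWMA whose weight depends on the direction of change, so the estimate rises quickly with the load but decays slowly. The specific alpha values here are illustrative assumptions, not the system's tuned parameters.

def fusd_predict(prev_estimate, observed, alpha_up=-0.2, alpha_down=0.7):
    # EWMA form E(t) = a * E(t-1) + (1 - a) * O(t), with an asymmetric
    # weight: a small (here negative, so the estimate overshoots the
    # observation) weight when the load rises, and a large weight when
    # it falls. The alpha values are assumptions for illustration.
    a = alpha_up if observed > prev_estimate else alpha_down
    return a * prev_estimate + (1 - a) * observed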
Overview of the Rest of the Talk
System overview
Details of the algorithm
Simulation results
Experiments
Conclusion
13
Our Algorithm Definitions1 We define a server as a hot spot if the utilization of any of its resources is above a hot threshold. This indicates that the server is overloaded and hence some VMs running on it should be migrated away. We define a server as a cold spot if the utilizations of all its resources are below a cold threshold. This indicates that the server is mostly idle and a potential candidate to turn off/sleep to save energy. 13
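In code, the two definitions are one-liners; a minimal sketch, using the threshold values quoted later for the simulation (hot 0.9, cold 0.25):

def is_hot_spot(utilization, hot_threshold=0.9):
    # Hot spot: ANY resource above the hot threshold.
    return any(u > hot_threshold for u in utilization.values())

def is_cold_spot(utilization, cold_threshold=0.25):
    # Cold spot: ALL resources below the cold threshold.
    return all(u < cold_threshold for u in utilization.values())

print(is_hot_spot({"cpu": 0.8, "net": 0.8, "mem": 0.96}))   # True: mem > 0.9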
Parameters of the Algorithm
Hot threshold
Cold threshold
Green computing threshold
Warm threshold
Consolidation limit
Our Algorithm: Definitions (2)
We use skewness to quantify the unevenness in the utilization of multiple resources on a server.
Let n be the number of resources we consider and r_i be the utilization of the i-th resource. We define the resource skewness of a server p as
skewness(p) = \sqrt{ \sum_{i=1}^{n} ( r_i / \bar{r} - 1 )^2 }
where \bar{r} is the average utilization of all resources for server p.
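A direct transcription of the definition; a minimal sketch computing skewness from a per-resource utilization map:

from math import sqrt

def skewness(utilization):
    # Resource skewness of a server: sqrt(sum_i (r_i / r_bar - 1)^2),
    # where r_bar is the mean utilization over the n resources.
    # Lower skewness means the resources are used more evenly.
    r = list(utilization.values())
    r_bar = sum(r) / len(r)
    if r_bar == 0:
        return 0.0   # an idle server has no unevenness to measure
    return sqrt(sum((ri / r_bar - 1) ** 2 for ri in r))

print(round(skewness({"cpu": 0.6, "net": 0.6, "mem": 0.05}), 3))   # 1.078: very uneven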
Our Algorithm: Definitions (3)
We define the temperature of a hot spot p as the square sum of its resource utilizations beyond the hot threshold.
Let R be the set of overloaded resources in server p and r_t be the hot threshold for resource r. The temperature is defined as
temperature(p) = \sum_{r \in R} ( r - r_t )^2
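And the corresponding sketch; a temperature of zero means the server is not a hot spot:

def temperature(utilization, hot_thresholds):
    # Temperature of a hot spot: sum of (r - r_t)^2 over the overloaded
    # resources R, i.e. those whose utilization r exceeds their hot
    # threshold r_t. Hotter servers are handled first.
    return sum((r - hot_thresholds[name]) ** 2
               for name, r in utilization.items()
               if r > hot_thresholds[name])

thresholds = {"cpu": 0.9, "net": 0.9, "mem": 0.9}
print(round(temperature({"cpu": 0.8, "net": 0.8, "mem": 0.96}, thresholds), 4))   # 0.0036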
Layout
[Flowchart: predict the load results and initialize the hot threshold; generate the hotspots and sort them by temperature; while any hotspot remains to be solved, choose a hot PM and try to solve it; when none remain, done.]
Our Algorithm
For each hot spot:
Which VM should be migrated away?
Where should it be migrated to?
Solve Hotspot
[Flowchart:
1. Sort the VMs on the hot PM by the temperature the PM would have if that VM were migrated away.
2. For each VM in turn, find the PMs that can accept it without becoming hotspots themselves; these form the target list.
3. If the target list is empty, try the next VM; if no VM is left to try, we fail to solve this hotspot.
4. Otherwise, choose as the destination the PM with the maximum decrease in skewness after accepting the VM, migrate the VM, and the hotspot is solved (or at least cooled).]
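The flow above compresses into a short loop. A sketch under assumed interfaces: PM objects with a .vms list, and the helpers temperature_after, would_become_hot, skewness_decrease, and migrate are stand-ins, not the system's real API.

def solve_hotspot(hot_pm, all_pms, temperature_after, would_become_hot,
                  skewness_decrease, migrate):
    # Try VMs in the order that cools the hot PM the most if removed.
    vms = sorted(hot_pm.vms, key=lambda vm: temperature_after(hot_pm, vm))
    for vm in vms:
        # Candidate destinations must not become hotspots themselves.
        targets = [pm for pm in all_pms
                   if pm is not hot_pm and not would_become_hot(pm, vm)]
        if not targets:
            continue   # no PM can take this VM; try the next one
        # Pick the PM whose skewness decreases the most by accepting vm.
        dest = max(targets, key=lambda pm: skewness_decrease(pm, vm))
        migrate(vm, hot_pm, dest)
        return True    # hotspot solved, or at least cooled
    return False       # fail to solve this hotspot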
Skewness: Example
[Worked example: PM1 is a hotspot (mem > 0.9) hosting VM1 (cpu:0.1 net:0.1 mem:0.61), VM2 (cpu:0.1 net:0.1 mem:0.3), and VM3 (cpu:0.6 net:0.6 mem:0.05); PM2 hosts VM4 (cpu:0.3 net:0.3 mem:0.2); PM3 is idle. The candidate moves are annotated with temperature and skewness values (t:0.0036 s:-0.337, t:0.0036 s:0.106, t:0.0035 s:-1.175 for VM1-VM3; s:0.1178 and s:-0.98 for the destinations), and the algorithm picks the migration that cools PM1 while decreasing skewness the most.]
Green Computing
[Flowchart: if green computing is needed, generate the coldspot and non-coldspot lists and sort the coldspots by used RAM; while any coldspot remains, choose a cold PM and try to solve it; if it is solved, add the moves to the migration list, otherwise move the cold PM to the non-coldspot list; stop once the number of coldspots solved exceeds the consolidation limit or no coldspots remain.]
Solve Coldspot
[Flowchart: for each VM on the cold PM:
1. From all non-coldspots, find the PMs whose resource utilizations stay below the warm threshold after accepting the VM; these form the target list.
2. If the target list is not empty, choose as the destination the PM with the maximum decrease in skewness after accepting the VM.
3. If it is empty, build a target list from the remaining coldspots in the same way; if that list is empty too, we fail to solve this coldspot.
4. If the destination PM becomes a non-coldspot after accepting the VM, move it to the non-coldspot list.
5. When every VM has been placed, the coldspot is solved.]
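As with the hotspot case, a sketch with assumed helpers (stays_warm, skewness_decrease, is_cold, and migrate are stand-ins for the real interfaces):

def solve_coldspot(cold_pm, non_coldspots, coldspots,
                   stays_warm, skewness_decrease, is_cold, migrate):
    for vm in list(cold_pm.vms):
        # Prefer non-coldspots; fall back to the remaining coldspots.
        for pool in (non_coldspots, coldspots):
            targets = [pm for pm in pool
                       if pm is not cold_pm and stays_warm(pm, vm)]
            if targets:
                break
        else:
            return False   # some VM cannot be placed: fail to solve
        # Pick the PM whose skewness decreases the most by accepting vm.
        dest = max(targets, key=lambda pm: skewness_decrease(pm, vm))
        migrate(vm, cold_pm, dest)
        if dest in coldspots and not is_cold(dest):
            coldspots.remove(dest)        # the destination warmed up;
            non_coldspots.append(dest)    # it is no longer a coldspot
    return True   # all VMs placed; cold_pm can be put to sleep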
Skewness: Green Computing Example
[Worked example with cold threshold = 0.25 and warm threshold = 0.65: VM1 (cpu:0.1 net:0.1 mem:0.1), VM2 (cpu:0.1 net:0.1 mem:0.1), and VM3 (cpu:0.2 net:0.2 mem:0.25) run on lightly loaded PMs, while PM3 hosts VM4 (cpu:0.5 net:0.5 mem:0.5). The coldspot solver consolidates the small VMs onto a single PM whose utilization stays below the warm threshold, so the emptied PM can be put to sleep.]
Comparison with the Black/Gray-box (BG) Algorithm (NSDI'07)
The differences between our algorithm and the BG algorithm:
Which VM in a hot spot to migrate away: BG chooses the VM with the maximum volume-to-size ratio (VSR); skewness chooses the VM that reduces the server's temperature the most.
Which destination PM accepts the VM: BG chooses the PM with the minimum volume; skewness chooses the PM whose skewness can be reduced the most.
Green computing: BG has no green computing, while skewness does.
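For reference, the BG metrics in code; these are the Sandpiper definitions, and they reproduce the numbers on the example slide below.

def volume(cpu, net, mem):
    # Volume of a PM or VM: the product of 1/(1-u) over its resources;
    # load near saturation on any dimension inflates it sharply.
    # E.g. volume(0.8, 0.8, 0.96) is about 625, as for PM1 below.
    return 1.0 / ((1 - cpu) * (1 - net) * (1 - mem))

def vsr(cpu, net, mem):
    # Volume-to-size ratio of a VM, with size taken as its memory
    # footprint: BG migrates the max-VSR VM to move the most load
    # per byte of memory copied.
    return volume(cpu, net, mem) / mem

print(round(vsr(0.6, 0.6, 0.05), 1))   # 131.6, VM3 on the slide below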
Blackbox: Example
[Before: PM1 is a hotspot (mem > 0.9, vol:625.00) hosting VM1 (cpu:0.1 net:0.1 mem:0.61, VSR:5.19), VM2 (cpu:0.1 net:0.1 mem:0.3, VSR:5.88), and VM3 (cpu:0.6 net:0.6 mem:0.05, VSR:131.6); PM2 (vol:2.55) hosts VM4 (cpu:0.3 net:0.3 mem:0.2); PM3 (vol:1) is idle.
After: BG migrates VM3, the VM with the maximum VSR, to PM3, the PM with the minimum volume; PM1 still has mem > 0.9 and remains a hotspot.]
Analysis of the Algorithm
The skewness algorithm consists of three parts (let n and m be the numbers of PMs and VMs in the system):
Load prediction: O(n + m) ~ O(n)
Hot spot mitigation: O(n^2)
Green computing: O(n^2) (if we restrict the number of cold spots solved per round, this can be brought down to O(n))
The overall complexity of the algorithm is bounded by O(n^2).
Overview of the Rest of the Talk
System overview
Details of the algorithm
Simulation results
Experiments
Conclusion
Simulation
Workload traces are collected from a variety of servers in our university, including our faculty mail server, the central DNS server, the syslog server of our IT department, the index server of our P2P storage project, and many others.
A synthetic workload is also created to examine the performance of our algorithm in more extreme situations: it mimics the shape of a sine function (only the positive part) and ranges from 15% to 95% with a 20% random fluctuation, as in the sketch below.
[Trace plots: DNS server, desktop, log, mail server.]
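A minimal generator for that synthetic trace; the period and the exact way the fluctuation is applied are assumptions, since the slide only fixes the shape and the range.

import math
import random

def synthetic_load(t, period=1000.0):
    # Positive part of a sine wave, scaled into [15%, 95%], with a
    # 20% random fluctuation on top. The period and the noise model
    # are illustrative assumptions.
    base = max(0.0, math.sin(2 * math.pi * t / period))
    load = 0.15 + base * (0.95 - 0.15)
    load *= 1 + random.uniform(-0.2, 0.2)
    return min(max(load, 0.0), 1.0)

trace = [synthetic_load(t) for t in range(2000)]   # two periods of load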
Parameters in the Simulation
Hot threshold: 0.9
Cold threshold: 0.25
Warm threshold: 0.65
Green computing threshold: 0.4
Consolidation limit: 0.05
Effect of Thresholds on APMs
[Figures: (a) different thresholds; (b) #APM (number of active PMs) with the synthetic load.]
Load Balance
[Figure: Gini index of CPU and memory utilization.]
The Gini coefficient is widely used to measure the distribution of wealth in a society. A small coefficient indicates a more equal distribution.
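A sketch of the metric as applied here, computing the Gini coefficient over per-PM utilizations with the standard sorted-weights formula:

def gini(values):
    # Gini coefficient of non-negative values (per-PM CPU or memory
    # utilizations): 0 means a perfectly equal distribution, values
    # near 1 mean the load is concentrated on a few PMs.
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if n == 0 or total == 0:
        return 0.0
    weighted = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

print(gini([0.5, 0.5, 0.5, 0.5]))   # 0.0: perfectly balanced load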
Scalability of the Algorithm
[Figures: (a) average decision time; (b) total number of migrations; (c) number of migrations per VM.]
Effect of Load Prediction
Varying the VM-to-PM Ratio
Overview of the Rest of the Talk
System overview
Details of the algorithm
Simulation results
Experiments
Conclusion
Experiments
Environment: 30 servers, each with two Intel E5420 CPUs and 24 GB of RAM. The servers run Xen 3.3 and Linux 2.6.18 and are connected over Gigabit Ethernet to three centralized NFS storage servers.
Benchmark: TPC-W, a transactional web e-commerce benchmark.
Algorithm Effectiveness
Application Performance
Algorithm Effectiveness
Load Balancing
Resource Balance
[Figure: resource balance for mixed workloads.]
Comparison with the BG Algorithm
[Figure: Black/Gray-box algorithm.]
Comparison with the BG Algorithm
[Figure: skewness algorithm.]
Overview of the Rest of the Talk
System overview
Details of the algorithm
Simulation results
Experiments
Conclusion
Conclusion
We have presented a resource management system for Amazon EC2-style cloud computing services.
We use the skewness metric to combine VMs with different resource characteristics appropriately, so that the capacities of servers are well utilized.
Our algorithm achieves both overload avoidance and green computing for systems with multi-resource constraints.
Thank You!