Understanding Performance Interference of I/O Workload in Virtualized Cloud Environments
Xing Pu²,¹  Ling Liu¹  Yiduo Mei³,¹  Sankaran Sivathanu¹  Younggyun Koh¹  Calton Pu¹
¹Georgia Institute of Technology  ²Beijing Institute of Technology  ³Xi’an Jiaotong University
IEEE CLOUD 2010
Presented by: Yun Liaw
Outline
- Introduction
- Background and Overview
- Interference Analysis
- Performance Interference Study
- Related Works
- Conclusions and Comments
Introduction (1/2)
Virtualization Technology:
- Allows diverse applications to run in isolated environments by creating multiple virtual machines (VMs)
- A VM's computing resources are managed by the virtual machine monitor (VMM), a.k.a. the hypervisor
- Although the hypervisor can slice and allocate resources to different VMs, this study shows that applications running on one VM may still affect the performance of applications running on a neighboring VM
Introduction (2/2)
The focus of this paper: the performance interference among different VMs running on the same hardware platform, with an emphasis on network I/O processing
Xen Network I/O Overview
How is a packet delivered to a guest domain?
1. The NIC receives a packet and raises an interrupt, which is handled by the driver domain (Dom0)
2. The packet is sent to the (software) bridge, then delivered to the appropriate backend interface
3. The backend raises a request to the VMM for an unused memory page
4. The frontend and backend exchange the page descriptor
5. The guest domain OS receives the packet
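This split-driver path can be summarized as a simple event sequence. The following Python sketch is purely illustrative pseudocode of that flow; all class, method, and variable names are invented for this illustration and are not Xen APIs:

```python
# Illustrative model of Xen's split-driver packet delivery path.
# All names here are invented for this sketch; they are not Xen APIs.

class VMM:
    """Stands in for the hypervisor: page granting and event notification."""
    def grant_unused_page(self):
        return {}                     # an "unused memory page", modeled as a dict

    def notify(self, guest, page):
        guest.on_event(page)          # event-channel notification to the guest


class GuestDomain:
    """DomU with a netfront-style frontend driver."""
    def __init__(self, name):
        self.name = name

    def on_event(self, page):
        # The guest domain OS receives the packet from the shared page
        print(f"{self.name} received packet: {page['data']}")


class DriverDomain:
    """Dom0: NIC driver, software bridge, and netback-style backend."""
    def __init__(self, vmm, bridge):
        self.vmm = vmm
        self.bridge = bridge          # maps a destination to a guest domain

    def on_nic_interrupt(self, dst, data):
        guest = self.bridge[dst]                 # bridge picks the backend/guest
        page = self.vmm.grant_unused_page()      # request an unused page from the VMM
        page["data"] = data                      # backend fills the page (descriptor exchange)
        self.vmm.notify(guest, page)             # guest is notified and consumes the packet


if __name__ == "__main__":
    vmm = VMM()
    dom1 = GuestDomain("Dom1")
    dom0 = DriverDomain(vmm, bridge={"10.0.0.1": dom1})
    dom0.on_nic_interrupt("10.0.0.1", b"GET /index.html")
```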
Testbed Settings
Hardware: IBM ThinkCentre A52 Workstation
- Two 3.2 GHz Intel Pentium 4 CPUs
- 2 GB 400 MHz DDR memory
- 250 GB 7200 RPM SATA2 Seagate disk
- Network: 1 Gbit/s Ethernet
Hypervisor: Xen 3.4.0 with the Linux Xen kernel
VM Configuration:
- CPU: default Xen CPU scheduler, configured with equal weights
- 512 MB memory
- Apache HTTP Server
Client:
- Uses the httperf tool as the load generator
- Request: retrieving a fixed-size file buffered in the VM's cache: 1 KB, 4 KB, 10 KB, 30 KB, 50 KB, 70 KB, 100 KB
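As a rough illustration, a single load point for one of these workloads could be generated with an httperf invocation along the following lines. The server address, URI, request rate, and connection count below are assumed values for illustration, not the paper's actual parameters; the command is wrapped in Python so all sketches stay in one language:

```python
# Hedged sketch: drive one load point against the Apache server in a guest VM
# using httperf. Server address, URI, rate, and connection count are assumptions.
import subprocess

cmd = [
    "httperf",
    "--server", "192.168.1.10",   # assumed IP address of the guest VM
    "--port", "80",
    "--uri", "/file_10KB",        # assumed name of the fixed-size cached file
    "--rate", "500",              # requests per second for this load point
    "--num-conns", "30000",       # total connections to issue
]
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)              # httperf reports reply rate, throughput, errors
```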
I/O Workloads
The performance results of a single guest domain when the server becomes saturated
Interference Analysis (1/4)
Notations:
- Dom0, Dom1, ..., Domn: the VMs running on the same host; suppose Dom_i is serving workload i
- T_i: the maximum throughput of Dom_i
- B_i: the maximum throughput of Dom_i in the basecase scenario where n = 1
- Combined normalized throughput ratio score: see the reconstruction below
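The slide's formula image did not survive extraction. Based on the definitions above, the combined normalized throughput ratio score is most plausibly the sum of each domain's throughput normalized by its basecase throughput; the following LaTeX is a reconstruction under that assumption:

```latex
% Reconstruction (assumption): combined normalized throughput ratio score
\[
  \text{score} \;=\; \sum_{i=1}^{n} \frac{T_i}{B_i}
\]
```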
Interference Analysis (2/4)
Xen's I/O design makes the driver domain easily become the bottleneck: every communication has to go through Dom0 and the hypervisor layer
Interference Analysis (3/4)
Measured system-level workload characteristics (1/2):
- CPU utilization (CPU): the average CPU utilization of each VM
- VMM events per second (Event)
- VM switches per second (Switch)
- I/O count per execution (I/O Execution): the number of memory pages exchanged per execution
  - Execution, as defined in Xenmon: the number of times a domain is scheduled to run
Interference Analysis (4/4)
Measured system-level workload characteristics (2/2):
- VM state (Waiting, Block)
  - A VM on a host must be in one of the following states: execution, runnable, or blocked
  - "Waiting" metric: the percentage of time the VM spends in the runnable state
  - "Block" metric: the percentage of time the VM spends in the blocked state
- Executions per second (Execution)
  - The "I/O count per execution" metric is calculated by dividing "pages exchanged" by this measured metric (see the sketch below)
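A minimal sketch of how the derived metrics above can be computed from raw per-domain counters over a sampling window; the counter names and sample values are assumptions for illustration, not Xenmon's actual output fields:

```python
# Hedged sketch: derive the slide's metrics from raw per-domain counters.
# Field names and sample numbers are assumptions, not Xenmon's real output.

raw = {
    "executed_ns":     450_000_000,   # time in execution state during the sample (ns)
    "runnable_ns":     120_000_000,   # time in runnable (waiting) state (ns)
    "blocked_ns":      430_000_000,   # time in blocked state (ns)
    "executions":      9_000,         # times the domain was scheduled to run
    "pages_exchanged": 54_000,        # memory pages exchanged with the driver domain
}
sample_ns = 1_000_000_000             # 1-second sampling window

cpu_util    = 100.0 * raw["executed_ns"] / sample_ns         # CPU (%)
waiting     = 100.0 * raw["runnable_ns"] / sample_ns         # Waiting (%)
block       = 100.0 * raw["blocked_ns"]  / sample_ns         # Block (%)
io_per_exec = raw["pages_exchanged"] / raw["executions"]     # I/O count per execution

print(f"CPU={cpu_util:.1f}%  Waiting={waiting:.1f}%  "
      f"Block={block:.1f}%  I/O per execution={io_per_exec:.1f}")
```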
Performance Interference Study – Basecase Measurement of Workload Characteristics
Performance Interference Study – Throughput Interference (1/4)
Throughput interference experiment setup:
- CPU-bound workload representative: 1 KB
- Network-bound workload representative: 100 KB
- The following experiments analyze 3 combinations of workloads on the testbed: (1, 1), (1, 100), and (100, 100)
Performance Interference Study – Throughput Interference (2/4)
The x-axis (Load): the percentage of the maximum throughput achieved in the basecase
(100, 100):
a. (100, 100) causes network resource contention and saturates the host at 50% load
b. The overhead of (100, 100) may come from the high event and switch costs
c. Heavy event and switch costs lead to many more interrupts that need to be processed, so fewer I/O page exchanges occur
(1, 1):
a. Poor performance
b. Lowest switch number
c. Before 80% load, (1, 1) is more efficient than (1, 100)
- The (1, 1) throughput interference may be caused by fast I/O processing between the guest domains and Dom0
Performance Interference Study – Throughput Interference (3/4)
(100, 100):
d. Dom0 is busy being scheduled for CPU processing of the interrupts
e, h. The domain block time of (100, 100) in both Dom0 and the guest domain is relatively high compared to the others, which indicates the VMs are frequently blocked on I/O events and switches
f, i. Because of the high blocking time in (100, 100), the CPU run queues are not crowded, so the CPUs can serve the VMs much faster
(1, 1):
d, g. Dom0 and the VMM are busy delivering notifications over the event channel
f, i. The CPU run queues are crowded with VCPUs waiting for more CPU resources
d, g. The CPU sharing weights of Dom0 and the guest domains are the same, so the CPU utilization of Dom0 and the guest domains follows a similar trend
Performance Interference Study – Throughput Interference (4/4)
Concluding remarks:
- Due to the larger number of packets to be routed per HTTP request, the combination of network-bound workloads leads to higher event and switch costs
- To achieve high throughput, the combination of CPU-bound workloads results in the guest domains competing with the driver domain for CPU resources to perform fast I/O executions
- The workload combination with the least resource competition is (1, 100)
- Interference is highly sensitive to the efficiency of the driver domain, because multiple VMs compete for I/O processing and switching efficiency in the driver domain
Performance Interference Study – Net I/O Interference (1/2)
Net I/O ≈ (workload size) × (request rate)
- (100, 100) has better network I/O at 30%–70% load rates (range II)
- Range II: in figure h, the block times of both the 100 KB and 1 KB workloads are relatively flat, while the waiting time stays around 2%, lower than that of (1, 100)_100KB
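As a quick worked example of the approximation above, where the request rate is an assumed value rather than a number from the paper:

```latex
% Worked example (assumed request rate of 500 req/s, not from the paper):
\[
  \text{Net I/O} \approx 100\,\text{KB} \times 500\,\text{req/s}
  = 50\,\text{MB/s} \approx 400\,\text{Mbit/s}
\]
% i.e., two such 100 KB workloads together approach the 1 Gbit/s link capacity.
```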
Performance Interference Study – Net I/O Interference (2/2)
The 1 KB workload in the (1, 100) combination gets "priority" treatment, while the 100 KB workload is blocked more often and waits longer
The CPU scheduler: Credit Scheduler
- Default setting in this study: each VM has equal weight
- cap = 0, i.e., the scheduler works in work-conserving mode
- All virtual CPUs in the CPU queue are served in FIFO manner
- When a VM receives an interrupt while it is idle, it enters a special state with higher priority and is inserted at the head of the queue
Consider the (1, 100) combination:
- Since 1 KB is the smallest file, its requests finish faster and the VM enters the idle state more often, so the hypervisor puts VM1 at the head of the queue more frequently
- Thus, the effect of the default CPU scheduler setting is to "prioritize" the CPU-intensive workload (see the sketch below)
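A minimal, simplified sketch of this boost-on-wakeup behavior; it illustrates the mechanism described on the slide and is not Xen's actual Credit Scheduler implementation:

```python
# Simplified model of the boost-on-wakeup behavior described above.
# This is an illustration, not Xen's actual Credit Scheduler code.
from collections import deque

class VCPU:
    def __init__(self, name):
        self.name = name
        self.idle = False          # True once its current request has finished

run_queue = deque()                # FIFO run queue, equal weights, cap = 0

def enqueue(vcpu, woken_by_interrupt=False):
    # A VCPU that was idle and is woken by an interrupt gets "boosted":
    # it is inserted at the head of the queue instead of the tail.
    if woken_by_interrupt and vcpu.idle:
        run_queue.appendleft(vcpu)
    else:
        run_queue.append(vcpu)
    vcpu.idle = False

vm1 = VCPU("VM1 (1 KB requests, finishes fast, often idle)")
vm2 = VCPU("VM2 (100 KB requests, rarely idle)")

enqueue(vm2)                             # VM2 is still busy: goes to the tail
vm1.idle = True                          # VM1 finished its short request and went idle
enqueue(vm1, woken_by_interrupt=True)    # a new request arrives: VM1 jumps the queue

print([v.name for v in run_queue])       # VM1 is scheduled before VM2
```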
Related Works
- There is still a lack of in-depth understanding of the performance factors that can impact the efficiency and effectiveness of resource multiplexing
- Koh et al. [8] studied the effects of performance interference between two VMs hosted on the same hardware platform, but their study did not involve network I/O workloads
[8] Y. Koh, R. Knauerhase, P. Brett, M. Bowman, Z. Wen and C. Pu, "An analysis of performance interference effects in virtual environments," IEEE ISPASS, 2007
Conclusions
This paper presents an experimental study of the performance interference in processing CPU-intensive and network-intensive workloads in a Xen environment
Through the experimental study, the paper concludes:
- Running network-intensive workloads together can lead to overhead due to extensive context switches and events in Dom0 and the VMM
- Co-locating CPU-intensive workloads incurs high CPU contention due to the demand for fast memory page exchanges
- Running a CPU-intensive and a network-intensive workload in conjunction incurs the least resource contention
- Identifying the factors that impact the total demand for exchanged memory pages is critical to an in-depth understanding of interference overheads
Comments
- Among the first papers (perhaps the first?) to discuss the impact of I/O on performance in a virtualized environment
- The conclusions match intuition
- Lacks a discussion of latency
- The experimental workloads do not reflect real-world conditions