1 PERFORMANCE DIFFERENTIATION OF NETWORK I/O IN XEN
by Kuriakose Mathew (08305062)
under the supervision of Prof. Purushottam Kulkarni and Prof. Varsha Apte
2 Outline
- Introduction
- Need for network I/O differentiation in Xen
- Implementation of the tool for network I/O configuration in Xen
- Validation of the tool
- Conclusion and future work
3 Virtualization
- Virtualization: abstraction of resources
- Single-OS system vs. virtualized system
[Figure: Virtualization, http://software.intel.com]
4 Need for Performance Differentiation
[Figure: two clients (Client1, Client2) sharing a virtualized server; Virtualization, http://software.intel.com]
5 Xen: an Open-Source VMM
[Figure: Xen architecture showing Dom-0, Dom-U, the VMM, and hardware, from David Chisnall, The Definitive Guide to the Xen Hypervisor]
6 Performance Differentiation in Xen
CPU: the credit scheduler
- Weights: relative share of physical CPU time a domain gets, e.g. 128:256 or 256:512
- Caps: absolute maximum physical CPU time a domain may use, e.g. 20% or 50%
- Example: xm sched-credit -d <domain> -w 256 -c 20
- Tools exist for configuration
Network I/O
- No means of configuration and no weighted scheduling
- Provides only fair scheduling
7 Network I/O Differentiation in Xen
With the existing methods in Xen, network bandwidth utilization cannot be controlled.
[Figure: two scenarios, both with Dom1 capped at 20% CPU and Dom2 at 80% CPU; in one, Domain 1 uses 25% CPU / 2 MBps and Domain 2 uses 35% CPU / 2 MBps, while in the other Domain 2 uses 35% CPU / 5 MBps]
Hence the need for a separate control mechanism for network I/O differentiation.
8 Previous Work at IIT B
The DDP 08-09 work proposed and implemented a weighted network I/O scheduler providing bandwidth limits and guarantees for VMs in Xen:
- Limits: maximum bandwidth usage allowed for a VM
- Guarantees: amount of available bandwidth that a VM is assured
9 Shortcomings of Previous Work
- Values hard-coded in the Netback driver, so the kernel had to be recompiled to change a parameter
- Lack of a dynamic configuration tool, making experimentation and validation difficult
- Interference from the CPU scheduler
- Complex sharing of credit values
10 Overall MTP Goal
- Design and implement a tool for specification and dynamic reconfiguration of limits for network I/O in Xen
- Simplification of the existing algorithm that provides bandwidth guarantees for network I/O
- Study of the interference of the Xen CPU scheduler with the network I/O configuration
- Validation with realistic I/O-intensive applications
11 High-level Specification of the Tool
Dynamically specify the bandwidth usage limits and guarantees of a virtual machine.
Usage: xmsetbw -d <domain_name> -g <guarantee> -l <limit>
- domain_name: name of the domain whose bandwidth parameters need to be set
- guarantee: the bandwidth guarantee provided to the domain
- limit: the maximum limit on bandwidth usage for the domain
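For example (an illustrative invocation, not taken from the slides): xmsetbw -d vm1 -g 2 -l 5 would request a guarantee of 2 MBps and a limit of 5 MBps for the domain vm1.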
12 Xen Device Driver Model
[Figure: Xen device driver model with its front-end/back-end split, from David Chisnall, The Definitive Guide to the Xen Hypervisor; annotated with where the tool's input enters]
13 Packet Transmission in Xen
[Figure: packet transmission/reception path through Xenstore, from Sriram Govindan, "Xen and Co.: Communication-aware CPU Scheduling"]
14 Xenstore and Xenbus
Xenstore:
- Exchanges out-of-band information between domains
- A hierarchical, filesystem-like database used for sharing small amounts of information
- Contains three main paths: /vm, /local/domain, /tools
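As an illustration of how a kernel component reads such a key, here is a minimal sketch using the Linux Xenbus API of that era; the path and key name are hypothetical, not the ones used by the tool:

    #include <linux/err.h>
    #include <linux/slab.h>
    #include <xen/xenbus.h>

    /* Read a (hypothetical) per-domain key from Xenstore.
     * XBT_NIL means "no transaction"; the caller owns the buffer. */
    static int read_bw_key(void)
    {
            char *val = xenbus_read(XBT_NIL, "/local/domain/1",
                                    "bw-limit", NULL);

            if (IS_ERR(val))
                    return PTR_ERR(val);
            printk(KERN_INFO "bw-limit = %s\n", val);
            kfree(val);
            return 0;
    }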
15 Xenbus: the Interface to Xenstore
[Figure: userspace tools and VMs reach Xenstore through Xenbus]
16 Xenbus (cont.)
- Provides APIs for reading and writing data
- Supports transactions for grouped execution of operations
- Watches: associated with a key in Xenstore; a change in the key's value triggers a call-back (see the sketch below)
- User-kernel interaction: register_xenbus_watch, register_xenstore_notifier
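A minimal sketch of a watch, assuming the vec-based call-back signature of the 2.6-era Xen kernels (newer kernels changed it); the watched path is hypothetical:

    #include <xen/xenbus.h>
    #include <xen/interface/io/xs_wire.h>

    /* Triggered whenever a key under the watched node changes. */
    static void bw_watch_cb(struct xenbus_watch *watch,
                            const char **vec, unsigned int len)
    {
            printk(KERN_INFO "xenstore changed at %s\n",
                   vec[XS_WATCH_PATH]);
    }

    static struct xenbus_watch bw_watch = {
            .node     = "/tools/netbw",   /* hypothetical path */
            .callback = bw_watch_cb,
    };

    /* Arms the watch; returns 0 on success. */
    static int arm_bw_watch(void)
    {
            return register_xenbus_watch(&bw_watch);
    }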
17 Implementation of the Tool for Dynamic Network Configuration
Userspace tool: takes the command and user input, resolves the domain name to a domain id, formats a string with the parameters, and writes it to a predefined Xenstore path (sketched below).
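The userspace side can perform the write through libxenstore (xs.h). The sketch below is only an assumption about how such a tool might look; the path /tools/netbw/<domid> and the "guarantee:limit" value format are hypothetical:

    #include <stdio.h>
    #include <string.h>
    #include <xs.h>

    /* usage: xmsetbw <domid> <guarantee> <limit>  (illustrative only) */
    int main(int argc, char **argv)
    {
            struct xs_handle *xs;
            char path[64], val[64];

            if (argc != 4)
                    return 1;
            xs = xs_daemon_open();
            if (!xs)
                    return 1;
            snprintf(path, sizeof(path), "/tools/netbw/%s", argv[1]);
            snprintf(val, sizeof(val), "%s:%s", argv[2], argv[3]);
            /* XBT_NULL: write outside of any transaction */
            if (!xs_write(xs, XBT_NULL, path, val, strlen(val)))
                    fprintf(stderr, "xs_write failed\n");
            xs_daemon_close(xs);
            return 0;
    }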
18 Implementation of the Tool (cont.)
In the Netback driver, call-backs are registered in two levels. At start-up, an initial call-back (call-back 1) is registered via Xenbus; when it fires, it reads the initial value and registers a second call-back (call-back 2) on the predefined Xenstore path. On each Xenstore write by the tool, call-back 2 reads the new value and updates an array of structures and flags; before transmitting packets, the driver checks whether the structures were modified and applies the new values if so.
[Figure: flowchart of the two-level call-back registration between Xenbus, Xenstore, and Netback]
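A sketch of the two-level registration, assuming the standard register_xenstore_notifier() mechanism: the notifier (level 1) fires once Xenstore is usable, and only then is it safe to register the watch (level 2). Names are hypothetical; bw_watch is the watch from the earlier sketch.

    #include <linux/notifier.h>
    #include <xen/xenbus.h>

    static struct xenbus_watch bw_watch;  /* as in the earlier sketch */

    /* Level 1: runs once Xenstore becomes available. */
    static int bw_xenstore_ready(struct notifier_block *nb,
                                 unsigned long event, void *data)
    {
            /* Level 2: register the watch on the predefined path. */
            if (register_xenbus_watch(&bw_watch))
                    printk(KERN_ERR "netback: bandwidth watch failed\n");
            return NOTIFY_DONE;
    }

    static struct notifier_block bw_notifier = {
            .notifier_call = bw_xenstore_ready,
    };

    /* Called from Netback start-up. */
    static void bw_setup(void)
    {
            register_xenstore_notifier(&bw_notifier);
    }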
19 Modification in the Netback Driver
- Credit scheduling in the Netback driver: by default, the network scheduler assigns equal credits to all domains, giving fair sharing of bandwidth
- Modified to assign weighted credits (a sketch follows)
[Figure: network scheduler, from the DDP report]
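The slides do not show the modified code; the following is only a hedged sketch of the weighted-credit idea: each domain's per-period byte credit is scaled by its configured limit instead of being equal across domains. The structure and field names are hypothetical, not the actual Netback ones.

    /* Hypothetical per-domain scheduling state. */
    struct netif_sched {
            unsigned long credit_bytes;   /* bytes allowed per period */
            unsigned long remaining;      /* credit left this period  */
    };

    #define PERIOD_USEC 100000UL          /* replenish every 100 ms */

    /* limit_mbps comes from the tool via Xenstore (earlier sketches). */
    static void set_weighted_credit(struct netif_sched *s,
                                    unsigned long limit_mbps)
    {
            /* MBps limit -> bytes per replenish period (10 periods/s) */
            s->credit_bytes = limit_mbps * 1000000UL /
                              (1000000UL / PERIOD_USEC);
    }

    /* Checked by the scheduler before transmitting a packet. */
    static int may_transmit(struct netif_sched *s, unsigned int pkt_len)
    {
            if (pkt_len > s->remaining)
                    return 0;             /* out of credit: defer */
            s->remaining -= pkt_len;
            return 1;
    }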
20 Can the bandwidth limit for each domain be set for a step increase in the bandwidth limit?
Configurations: C1: all domains 1 MBps; C2: dom4 2 MBps; C3: dom3 2 MBps, dom4 3 MBps; C4: dom2 2 MBps, dom3 3 MBps, dom4 4 MBps.
Measured bandwidths of 0.96, 1.92, 2.91, and 3.87 MBps: bandwidth is limited to within 96.4% of the configured value.
[Chart: achieved bandwidth per domain under each configuration]
21 Can the bandwidth limit for each domain be set for a step decrease in the bandwidth limit?
Configurations: C1: dom1 4 MBps, dom2 3 MBps, dom3 2 MBps, dom4 1 MBps; C2: dom1 and dom2 3 MBps, dom3 2 MBps, dom4 1 MBps; C3: dom1-dom3 2 MBps, dom4 1 MBps; C4: all domains 1 MBps.
Measured bandwidths of 3.87, 2.91, 1.92, and 0.96 MBps: bandwidth is limited to within 96.4% of the configured value.
[Chart: achieved bandwidth per domain under each configuration]
22 How is bandwidth shared among domains that are not bandwidth-limited?
Configurations: C1: no limit set; C2: dom5 limited to 1 MBps (measured 0.95); C3: dom5 limited to 2 MBps (measured 1.93).
Bandwidth is fairly shared among the domains that are not bandwidth-limited.
[Chart: per-domain bandwidth under each configuration]
23 How does the bandwidth change with time for a step change in the limit for a single domain?
Bandwidth settles within 6.7 seconds (on average) after a step change.
24 How does the bandwidth change with time for a step change in the limit for multiple domains?
Bandwidth limits can be set for multiple domains independently.
25 How does the CPU utilization in DomU and Dom-0 vary with different bandwidth limits for a DomU?
DomU utilization increases proportionally with the bandwidth limit, while Dom-0 utilization does not.
[Chart data: 1.6, 3.2, 4.7, 6.7, 8.1, 9.7, 11.4, 12.4 and 54.5, 144.7, 169.6, 182.3, 190.3, 193.4, 195.2, 206.6; the axis pairing is not recoverable from the residue]
26 Summary of Work Done
- Implemented a tool for specification and dynamic reconfiguration of bandwidth limits for VMs in Xen
- Modified the Netback driver of Xen to provide bandwidth limits
- Experimentation and validation to verify the bandwidth-limit configuration
27 Implementation Problems
- Compilation, installation, and debugging of Xen and the domains
- Code exploration to understand the Xen network I/O model; not much documentation available
- Xenstore is generally used between a guest domain and Domain-0; calling the Xenbus APIs directly from Netback caused problems, so two levels of call-backs were needed
28 Future Work
- Simplification of the algorithm that implements the bandwidth guarantee
- Study of the effect of the CPU scheduler on the bandwidth limits and guarantees
- Validation with realistic I/O-intensive applications
29 THANK YOU