VSE: Virtual Switch Extension for Adaptive CPU Core Assignment in softirq
Shin Muramatsu, Ryota Kawashima, Shoichi Saito, Hiroshi Matsuo
Nagoya Institute of Technology, Japan
Background
Public cloud datacenters (DCs) are spreading, and multi-tenancy is supported in many DCs:
›Multiple tenants' VMs run on the same physical server
›An overlay protocol enables network virtualization
Overlay-based Network Virtualization
[Figure: two physical servers, each hosting tenant A and tenant B VMs behind a virtual switch. The sending virtual switch encapsulates packets with a tunnel header, the packets cross the traditional datacenter network through an IP tunnel, and the receiving virtual switch decapsulates them.]
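For concreteness, VXLAN (RFC 7348) is one such overlay protocol: the inner Ethernet frame is wrapped in an outer UDP/IP packet plus an 8-byte VXLAN header carrying a 24-bit tenant identifier (VNI). A minimal sketch of the header layout in C:

```c
#include <stdint.h>

/* VXLAN header (RFC 7348): 8 bytes prepended to the inner
 * Ethernet frame, carried inside an outer UDP/IP packet. */
struct vxlan_hdr {
    uint8_t flags;        /* bit 0x08 set => VNI field is valid */
    uint8_t reserved1[3];
    uint8_t vni[3];       /* 24-bit VXLAN Network Identifier (tenant) */
    uint8_t reserved2;
};

/* Extract the VNI as a host-order integer. */
static inline uint32_t vxlan_vni(const struct vxlan_hdr *h)
{
    return ((uint32_t)h->vni[0] << 16) |
           ((uint32_t)h->vni[1] << 8)  |
            (uint32_t)h->vni[2];
}
```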
Problems in Receiver Physical Servers
[Figure: a physical server with a NIC and cores 1-4. The driver takes a HWIRQ, then a SWIRQ performs the packet processing (protocol stack, VXLAN decapsulation, vSwitch) before delivery to VM1 and VM2.]
Load distribution of the packet processing across cores is required.
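As background on the HWIRQ/SWIRQ split: a Linux NIC driver typically does almost nothing in the hardware interrupt and defers packet processing to NAPI, which runs in softirq context. A minimal sketch using the standard NAPI API (the `mynic_*` names and helpers are illustrative, not from the paper):

```c
#include <linux/interrupt.h>
#include <linux/netdevice.h>

/* Illustrative per-device driver state: one NAPI context. */
struct mynic_priv {
    struct napi_struct napi;
    /* ... rings, registers ... */
};

/* Illustrative helpers a real driver would provide. */
void mynic_disable_rx_irq(struct mynic_priv *priv);
void mynic_enable_rx_irq(struct mynic_priv *priv);
struct sk_buff *mynic_next_rx_skb(struct mynic_priv *priv);

/* HWIRQ handler: do minimal work, defer to NAPI. */
static irqreturn_t mynic_isr(int irq, void *data)
{
    struct mynic_priv *priv = data;

    mynic_disable_rx_irq(priv);
    napi_schedule(&priv->napi);      /* raises NET_RX_SOFTIRQ */
    return IRQ_HANDLED;
}

/* NAPI poll: runs in SWIRQ context; this is where the protocol
 * stack, VXLAN decapsulation and the vSwitch do their work. */
static int mynic_poll(struct napi_struct *napi, int budget)
{
    struct mynic_priv *priv = container_of(napi, struct mynic_priv, napi);
    struct sk_buff *skb;
    int done = 0;

    while (done < budget && (skb = mynic_next_rx_skb(priv)) != NULL) {
        napi_gro_receive(napi, skb); /* hand packet to the stack */
        done++;
    }
    if (done < budget) {
        napi_complete(napi);
        mynic_enable_rx_irq(priv);
    }
    return done;
}
```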
Receive Side Scaling (RSS)
[Figure: an RSS-enabled NIC with queues 1-4, one per core. A dispatcher hashes each incoming packet to a queue, and the corresponding core runs the protocol stack, VXLAN decapsulation, and vSwitch. Two failure cases are highlighted: a flow collision (two flows hashed to the same core) and a flow/VM collision (a flow processed on the core running a VM).]
The queue number is determined by a hash value calculated from the packet headers.
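The hash-to-queue mapping can be illustrated as follows; real RSS NICs use a Toeplitz hash over the 5-tuple and an indirection table, while the FNV-1a hash here is just a simple stand-in:

```c
#include <stddef.h>
#include <stdint.h>

#define RSS_TABLE_SIZE 128   /* indirection table: hash bucket -> queue */

struct flow_key {
    uint32_t src_ip, dst_ip;
    uint16_t src_port, dst_port;
    uint8_t  proto;
};

/* FNV-1a as a stand-in for the Toeplitz hash used by real NICs. */
static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

static uint32_t flow_hash(const struct flow_key *k)
{
    uint32_t h = 2166136261u;
    h = fnv1a(h, &k->src_ip,   sizeof k->src_ip);
    h = fnv1a(h, &k->dst_ip,   sizeof k->dst_ip);
    h = fnv1a(h, &k->src_port, sizeof k->src_port);
    h = fnv1a(h, &k->dst_port, sizeof k->dst_port);
    h = fnv1a(h, &k->proto,    sizeof k->proto);
    return h;
}

/* Queue (and hence core) selection: two distinct flows can land in
 * the same bucket, which is exactly the collision problem above. */
static int rss_queue(const struct flow_key *k,
                     const uint8_t table[RSS_TABLE_SIZE])
{
    return table[flow_hash(k) % RSS_TABLE_SIZE];
}
```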
Performance Impact of the Two Types of Collision
Tunneling protocol: VXLAN + AES (heavy-cost processing)
1. Two flows were handled on the same core: the packet processing load was concentrated on that core
2. A flow and a VM were handled by the same core: the core was used for packet processing instead of by the VM
Problems in Existing Models
The core is deterministically selected for HWIRQ, so:
›Heavy flows can be processed on a particular core
›Heavy flows and VMs can be processed on the same core
›Performance decreases
Proposed Model (Virtual Switch Extension)
VSE is a software component for packet processing in the network driver.
VSE determines the CPU core for SWIRQ
›The current core load is considered
›An appropriate core is selected
VSE has an OpenFlow-based flow table
›Controllers can manage how flows are handled
›Priority flows are processed on low-loaded cores
Vendor-specific functionality is not used
›The vendor lock-in problem is avoided
[Figure: VSE, holding an OF-based flow table, sits in the network driver beneath the tenant A and tenant B VMs.]
Architectural Overview of VSE
[Figure: two physical servers connected through the traditional datacenter network. On each, VSE runs in the network driver beneath the tenant A and tenant B VMs. A controller inserts flow entries into VSE's Flow Table (Match: L2-L4 headers; Actions: SWIRQ core, ...). VSE also maintains a Core Table (Number, Load, VM_ID) and relays packets accordingly.]
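A minimal C sketch of the two tables, assuming the schemas shown in the figure (an L2-L4 match with the SWIRQ core as the action, and per-core load/VM bookkeeping); the exact fields are illustrative:

```c
#include <stdint.h>

/* Flow table entry: OpenFlow-style L2-L4 match plus the SWIRQ action.
 * A full implementation would support wildcards per field. */
struct vse_flow_entry {
    uint8_t  dst_mac[6], src_mac[6];   /* L2 */
    uint32_t src_ip, dst_ip;           /* L3 */
    uint16_t src_port, dst_port;       /* L4 */
    uint8_t  ip_proto;
    int      swirq_core;               /* action: core to run the SWIRQ */
};

/* Core table entry: current load and the VM pinned to the core. */
struct vse_core_entry {
    int number;   /* core number (#1..#4 in the example) */
    int load;     /* current load, e.g. percent busy */
    int vm_id;    /* ID of the VM running on this core, -1 if none */
};
```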
How VSE Works
[Figure: packets arrive at an RSS-enabled NIC, a hash function maps them to queues 1-4, and VSE in the driver matches VM1's flow in its Flow Table (Match: VM1's flow; Action: SWIRQ to a selected core). The Core Table lists each core's load and the VM running on it (VM1 is running on one of the cores), so the SWIRQ is steered to a lightly loaded core that is not running VM1. A VM's flows can thus be matched and handled individually.]
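A sketch of the core selection VSE might perform with these tables, assuming it prefers the lowest-loaded core and avoids cores that run VMs (the slides do not spell out the exact policy, so this is illustrative; it reuses `struct vse_core_entry` from the sketch above):

```c
/* Pick the SWIRQ target: the lowest-loaded core not running a VM;
 * fall back to the overall lowest-loaded core if none is free. */
static int vse_select_core(const struct vse_core_entry *cores, int ncores)
{
    int best = -1, best_free = -1;

    for (int i = 0; i < ncores; i++) {
        if (best < 0 || cores[i].load < cores[best].load)
            best = i;
        if (cores[i].vm_id < 0 &&
            (best_free < 0 || cores[i].load < cores[best_free].load))
            best_free = i;
    }
    return best_free >= 0 ? cores[best_free].number : cores[best].number;
}
```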
Implementation
Exploiting Receive Packet Steering (RPS):
VSE
›determines the core for SWIRQ by using the Flow and Core tables
›notifies the determined core number to RPS
RPS
›executes the SWIRQ on the notified core
[Figure: VSE matches VM1's flow (Action: SWIRQ:4) and passes core 4 to RPS, which raises the SWIRQ so the protocol stack runs on core 4.]
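For reference, stock RPS is configured per receive queue with a static CPU bitmask in sysfs; VSE differs in that it chooses the core per flow inside the driver and hands that core number to RPS. A minimal sketch of the standard knob only:

```c
#include <stdio.h>

/* Enable stock RPS for eth0's queue 0 on cores 0-3 (mask 0xf).
 * VSE goes further: it selects the core per flow at runtime
 * rather than relying on this static mask. */
int main(void)
{
    FILE *f = fopen("/sys/class/net/eth0/queues/rx-0/rps_cpus", "w");
    if (!f) {
        perror("rps_cpus");
        return 1;
    }
    fprintf(f, "f\n");   /* hex CPU bitmask: cores 0,1,2,3 */
    fclose(f);
    return 0;
}
```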
Performance Evaluation
The network environment:
[Figure: two physical servers connected by a 40 GbE network, each with a virtual switch; Iperf clients run in VM1 and VM2 on one server and Iperf servers in VM1 (core2) and VM2 (core1) on the other. Traffic is UDP tunneled with VXLAN + AES.]
Machine specifications:
                Physical Server1        Physical Server2        VM
OS              CentOS 6.5 (2.6.32)     CentOS 6.5 (2.6.32)     ubuntu-server 12.04
CPU             Core i5 (4 cores)       Core i7 (4 cores)       1 core
Memory          16 Gbytes               16 Gbytes               2 Gbytes
Virtual Switch  Open vSwitch 1.11       Open vSwitch 1.11       -
Network         40GBASE-SR4             40GBASE-SR4             -
NIC             Mellanox ConnectX(R)-3  Mellanox ConnectX(R)-3  virtio-net
MTU             1500 bytes              1500 bytes              1500 bytes
Evaluation Details
Evaluation models:
Model     Function              HWIRQ target
default   RSS: off, VSE: off    core4
rss       RSS: on, VSE: off     core1~4
vse       RSS: on, VSE: on      core1~4
Iperf clients:
Protocol                     UDP (tunneling: VXLAN + AES)
Packet sizes                 64 / 1400 / 8192 bytes
Total evaluation time        20 minutes
Number of flow generations   20 times
Flow duration time           1 minute
Results: Total Throughput of Two VMs
[Figure: total throughput of the two VMs for each evaluation model.]
All fragmented packets were handled on a single core; VSE distributed the packet processing load appropriately.
Conclusion and Future Work
Conclusion
›We proposed VSE, which distributes the processing load of received packets
›Throughput can be improved using VSE
Future work
›Implement the protocol between a controller and VSE
›Adaptively change the SWIRQ target based on current core load