6WIND MWC IPsec Demo: Scalable Virtual IPsec Aggregation with DPDK for Road Warriors and Branch Offices


3 6WIND MWC IPsec Demo: Scalable Virtual IPsec Aggregation with DPDK for Road Warriors and Branch Offices

4 Scalable Virtual IPsec Aggregation
Bare metal performance in a virtual IPsec gateway: optimize resource utilization to increase throughput and scale services, and replace expensive, proprietary hardware. The virtual IPsec aggregator runs on a COTS server, terminating IPsec traffic from remote clients and forwarding clear traffic into the private cloud.

Secure remote access with IPsec: IPsec clients of all types, thousands of IPsec tunnels, 10 Gbps of aggregated traffic.

IPsec aggregator in the cloud: standard servers with virtual IPsec applications; right-size performance for optimized resource utilization to support service chaining and integration.

6WIND solution: Turbo IPsec + Virtual Accelerator, a virtual solution with bare metal performance. Turbo IPsec is scalable encryption software; Virtual Accelerator provides VM-agnostic hypervisor acceleration.

The test will show the advantages of a DPDK-enabled solution: an IPsec aggregation solution in which the aggregator terminates thousands of tunnels and demonstrates wirespeed performance with 10GE.

5 6WIND IPsec VPN for Road Warriors and Branch Offices: Testbed
Server A (client simulator) runs Turbo IPsec on bare metal and simulates 5,000 remote clients, each with its own IPsec tunnel. Server B (compute node) runs a Turbo IPsec virtual machine on a Linux hypervisor with 6WIND Virtual Accelerator. All traffic originates and terminates on the IXIA test generator across two 10GE ports: the generator simulates 5,000 clients on one side and terminates the 5,000 flows on the other. Traffic is encrypted by the bare metal Turbo IPsec appliance on Server A, carried over a 10GE link as 5,000 IPsec tunnels, and decrypted by the Turbo IPsec appliance on Server B's hypervisor. OpenStack is used to create the instances under test.
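To give a feel for the scale of the client simulation, the sketch below enumerates one tunnel identity per simulated client. The addressing scheme and names are hypothetical, for illustration only; they are not 6WIND's or IXIA's actual tooling.

import ipaddress

NUM_CLIENTS = 5000

def client_tunnels(base_net="10.0.0.0/16"):
    """Yield (client_id, inner_address), one pair per simulated IPsec client."""
    hosts = ipaddress.ip_network(base_net).hosts()  # enough hosts for 5,000 clients
    for client_id in range(NUM_CLIENTS):
        yield client_id, next(hosts)

for cid, addr in client_tunnels():
    if cid < 3:  # spot-check the first few identities
        print(f"client {cid}: inner address {addr}")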

6 Simulating 5000 IPsec Tunnels: Testbed Configuration and Scenarios
Equipment:
Traffic generator: 2 x 10GE ports; simulates 5,000 IPsec tunnels (users/flows); terminates both ends of each flow; IMIX packet sizes; unidirectional traffic.
Compute node (system under test): simulates the IPsec aggregator terminating 5,000 tunnels; 1 socket, 12 cores; Dell PowerEdge R530.

Scenarios:
Scenario 1: Linux VM with an IPsec application, on a hypervisor with Open vSwitch.
Scenario 2: 6WIND Turbo IPsec VM, on a hypervisor with Open vSwitch.
Scenario 3: 6WIND Turbo IPsec VM, on a hypervisor with Open vSwitch and Virtual Accelerator installed.

The goal of the testing is to find the highest throughput with optimal resource allocation. Finding this sweet spot helps size hardware platforms based on throughput per VNF/VM and vCPUs per VNF/VM; in the end it is about right-sizing the platform for optimal performance at the right price point. Encryption and authentication use AES-128 and HMAC-SHA1. Due to the overhead of ESP encapsulation, 10 Gbps of encrypted traffic generates approximately 7.5 Gbps of clear traffic; the sketch below shows where that overhead comes from.
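A minimal sketch of the ESP overhead arithmetic, assuming ESP tunnel mode with AES-128-CBC (16-byte IV and block size) and HMAC-SHA1-96 (12-byte ICV). The IMIX weighting below is a common convention, not necessarily the demo's exact mix, and the clear/encrypted ratio depends heavily on that mix, since small packets pay proportionally more overhead.

def esp_tunnel_size(inner_ip_len, block=16, iv=16, icv=12):
    """Outer IP packet size after ESP tunnel-mode encapsulation."""
    outer_ip, esp_hdr = 20, 8              # new outer IP header + SPI/sequence
    payload = inner_ip_len + 2             # plus pad-length and next-header bytes
    padded = -(-payload // block) * block  # round up to the cipher block size
    return outer_ip + esp_hdr + iv + padded + icv

# A common "simple IMIX" as (inner IP packet size, weight) -- an assumption.
imix = [(40, 7), (576, 4), (1500, 1)]
clear = sum(size * w for size, w in imix)
encrypted = sum(esp_tunnel_size(size) * w for size, w in imix)
print(f"clear traffic is ~{100 * clear / encrypted:.0f}% of the encrypted volume")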

7 Scalable and Cost-Effective Virtual IPsec Aggregation
Save costs: ownership costs are drastically lowered and ROI on infrastructure improves. Spare cores can be used to serve more users with more bandwidth, or to integrate IPsec with value-added services on the same server.

The system under test receives 10 Gbps of encrypted traffic across a 10GE link carrying 5,000 IPsec tunnels, decapsulates and decrypts the traffic, and forwards the clear traffic out a 10GE link.
Scenario 1 (Linux OVS and Linux IPsec): the bottleneck is the Linux IPsec VM; 2 Gbps of clear traffic, under 500 Kbps per user, no spare cores.
Scenario 2 (Linux OVS and 6WIND Turbo IPsec): the bottleneck moves to Linux OVS in the hypervisor; 6 Gbps of clear traffic, 1.2 Mbps per user, no spare cores.
Scenario 3 (6WIND Turbo IPsec in the VM plus 6WIND Virtual Accelerator in the hypervisor): all bottlenecks are removed; 10 Gbps of clear traffic, which is 10G wirespeed and 2 Mbps per user, achieved with 8 of the 12 cores left spare to run other applications or services.

Hardware: Dell PowerEdge R530 server; Intel Xeon CPU; 64 GB RAM; 12 cores in the compute node. The per-user figures follow from dividing clear throughput across the 5,000 tunnels, as checked below.
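A quick arithmetic check of the per-user figures (clear throughput divided evenly across 5,000 tunnels):

TUNNELS = 5000
scenarios = [("1: Linux IPsec + OVS", 2),
             ("2: Turbo IPsec + OVS", 6),
             ("3: Turbo IPsec + OVS + Virtual Accelerator", 10)]
for name, gbps in scenarios:
    kbps_per_user = gbps * 1e6 / TUNNELS   # Gbps -> total Kbps, split per tunnel
    print(f"Scenario {name}: {kbps_per_user:.0f} Kbps per user")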

8 Summary of Results
Scenario 1: Linux VM with an IPsec application, on a hypervisor with Open vSwitch.
Scenario 2: 6WIND Turbo IPsec VM, on a hypervisor with Open vSwitch.
Scenario 3: 6WIND Turbo IPsec VM, on a hypervisor with Open vSwitch and Virtual Accelerator installed.

                                    Scenario 1    Scenario 2    Scenario 3
Throughput (Gbps, % of linerate)    2.3 (23%)     6.1 (61%)     10 (100%)
Packet size                         IMIX          IMIX          IMIX
vCPUs assigned to the VM (1)        7             5             2
Linux vSwitch or VA cores (2)       12            2             2
Spare cores (3)                     None          None          8

Scenario 3 delivers 5x the throughput of Scenario 1.

(1) Actual vCPU utilization for VMs may be less, depending on overall system resource utilization.
(2) Hypervisor core usage varies with system resource utilization and can affect VM performance.
(3) Out of the 12 cores in the system.

There is a problem today in virtualization. While OpenStack lets users select the number of vCPUs assigned to a VM through the use of "flavors," there is no pinning of those vCPUs to the VM: the VM gets the equivalent of the assigned vCPUs only as long as sufficient system resources are available. All VMs and the hypervisor share the physical resources of the system; memory, I/O, and CPU cycles are all shared, and the physical resources can in fact be oversubscribed, with the hypervisor managing the oversubscription. This is by design, since dedicating CPU cycles and memory to VMs is not practical and would limit the usefulness of virtualization: for the majority of the time a VM uses only a portion of its resources, and hypervisor mechanisms allow resources to be oversubscribed and shared. So although a VM may be assigned 7 vCPUs, for example, those vCPUs are still a shared resource and may be reduced by other resource commitments in the system; the same applies to memory. In our particular case, with only one VM, the hypervisor competes with the VM for system resources, and it is not hard to determine which one will preempt the other.

The tests here have been tuned to find the optimum throughput for the IPsec scenario, that is, the right balance of CPU versus performance in each of the three scenarios. We found that at a certain point, adding vCPUs to the VM actually decreases performance, which indicates we are past the optimal point. By checking vCPU utilization in the hypervisor and in the VM, we can determine where the bottleneck is located (VM or hypervisor). The goal is to maximize throughput; a sketch of this tuning loop follows.
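A minimal sketch of that tuning loop. The throughput model is a toy stand-in for real IXIA measurements; its shape (rising, then declining past the sweet spot) mirrors the behavior described above, not measured data, and breaking on the first decline assumes a single-peaked curve.

def measure_throughput_gbps(vcpus):
    """Toy model: throughput rises with vCPUs, flattens at linerate, then
    falls as the VM contends with the hypervisor for the same cores."""
    return min(1.5 * vcpus, 10.0) - 1.2 * max(0, vcpus - 7)

def find_sweet_spot(total_cores=12):
    best_vcpus, best_gbps = 1, measure_throughput_gbps(1)
    for vcpus in range(2, total_cores + 1):
        gbps = measure_throughput_gbps(vcpus)
        if gbps <= best_gbps:   # adding vCPUs stopped helping: past the
            break               # optimum, the bottleneck has moved
        best_vcpus, best_gbps = vcpus, gbps
    return best_vcpus, best_gbps

print("sweet spot: %d vCPUs at %.1f Gbps" % find_sweet_spot())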

9 6WIND – Thank You. Backup slides after this point.

