1 SR-IOV Hands-on Lab – Rahul Shah, Clayne Robison

2 NFV Models – DPDK SRIOV usage

3 Intel® Ethernet XL710 Family (I/O Virtualization)

4 Hands-On Session (SRIOV)

5 SR-IOV Hands-on: System setup
[Diagram] Compute Node – Phase 1: VM with SR-IOV Virtual Function passthrough. The VM runs DPDK testpmd and DPDK pktgen on pass-through ports VF0 and VF1. The host OS keeps Port 0 and Port 1 (SRIOV-ON, i40e PF0 and PF1), and the two physical ports are connected by a 2x10G loopback.
[Diagram] Compute Node – Phase 2: VM with Physical Function passthrough. The VM runs DPDK testpmd and DPDK pktgen on pass-through ports PF0 and PF1, which the host hands over via pci-stub/vfio. Port 0 and Port 1 (SRIOV-OFF) are connected by the same 2x10G loopback.
This setup demonstrates SR-IOV + DPDK vs. physical passthrough + DPDK. Only one demo runs per compute node; however, if scripts are used to change the configuration, both could be demonstrated on one VM rather than two.

6 SR-IOV-Lab: Connect to Virtual Machine
SSH from your laptop (1) in to the Cluster Jump Server (2)
IP Address: ____ ; SSH v2 preferred
Username: student<1-50>; Password: same as the username (e.g. student9)
Repeat so that you have multiple connections to the Jump Server (2)
SSH from the Cluster Jump Server (2) in to your assigned HostVM (3)
$ ssh HostVM-____
Username: user; Password: user
1. Your Laptop → 2. Cluster Jump Server → 3. HostVM
Only even-numbered HostVMs can do the SR-IOV lab.
Note: You need two ssh sessions into the jump server.

7 Prepare Compute Node for I/O Pass-through
IOMMU support is required for VFs to function properly when assigned to a VM. The following steps enable IOMMU support in the Linux kernel:
(Done) Before booting the compute node OS, enable the Intel VT features in the BIOS.
(Done) Append "intel_iommu=on" to the GRUB_CMDLINE_LINUX entry in /etc/default/grub.
(Done) Update the compute node grub configuration using the grub2-mkconfig command: $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
(Done) Reboot the compute node for the IOMMU change to take effect.
Check the compute node kernel command line in Linux and look for "intel_iommu=on": $ cat /proc/cmdline
Note: Anything marked "compute node" requires root access to the bare metal. Because attendees don't have root access on bare metal, these steps will only be demonstrated. Steps marked "virtual machine" are taken by the attendees.
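As a quick sanity check, a minimal sketch of how to confirm on the compute node that the kernel actually enabled the IOMMU (the exact dmesg wording varies by kernel version):
$ grep intel_iommu=on /proc/cmdline
$ dmesg | grep -i -e DMAR -e IOMMU        # look for a line such as "DMAR: IOMMU enabled"
$ ls /sys/kernel/iommu_groups/            # non-empty once IOMMU groups have been created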

8 Create Virtual Functions
Linux does not create VFs by default. The X710 server adapter supports up to 32 VFs per port; the XL710 server adapter supports up to 64 VFs per port.
On the compute node, create the Virtual Functions:
# echo 4 > /sys/class/net/[INTERFACE NAME]/device/sriov_numvfs (for kernel versions 3.8.x and above)
# modprobe i40e max_vfs=4,4 (for kernel versions 3.7.x and below, to get 4 VFs per port)
On the compute node, verify that the Virtual Functions were created:
# lspci | grep 'X710 Virtual Function'
On the compute node, bring up the link on the Virtual Functions:
# ip link set dev [INTERFACE NAME] up
You can assign a MAC address to each VF on the compute node:
# ip link set dev enp6s0f0 vf 0 mac aa:bb:cc:dd:ee:00
Notes: Upon successful VF creation, the Linux operating system automatically loads the i40evf driver. During the creation of the user-defined number of VFs, the i40e driver assigns the MAC address 00:00:00:00:00:00 to each VF. The i40e driver has a built-in security feature that allows system administrators to assign a valid MAC address to a VF from within the host operating system. Once this is done, the VM that has the VF assigned to it is not allowed to change the VF MAC address from within the VM.
# ip link set ens787f0 vf 0 mac aa:bb:cc:dd:ee:ff
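A minimal end-to-end sketch, assuming the PF interface is named enp6s0f0 (substitute your own interface name and MAC addresses):
# cat /sys/class/net/enp6s0f0/device/sriov_totalvfs      # maximum VFs the device supports
# echo 4 > /sys/class/net/enp6s0f0/device/sriov_numvfs   # create 4 VFs on this PF
# lspci | grep 'Virtual Function'                        # the VFs appear as new PCI devices
# ip link set dev enp6s0f0 up
# ip link set dev enp6s0f0 vf 0 mac aa:bb:cc:dd:ee:00    # give VF 0 a valid MAC
# ip link show enp6s0f0                                  # lists each VF with its MAC address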

9 Prepare Hypervisor – KVM/libvirt Method
To simplify integration with VMs, SR-IOV Virtual Functions can be deployed as a pool of NICs in a libvirt network.
Compute node: Create an XML fragment/file that describes an SR-IOV network:
<network>
  <name>sr-iov-enp6s0f0</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='[INTERFACE NAME]'/>
  </forward>
</network>
Compute node: Use virsh to create a network based on this XML fragment:
# virsh net-define <sr-iov-network-description.xml>
Compute node: Activate the network:
# virsh net-start sr-iov-enp6s0f0
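A minimal usage sketch, assuming the fragment above is saved as sr-iov-network-description.xml:
# virsh net-define sr-iov-network-description.xml   # register the network with libvirt
# virsh net-start sr-iov-enp6s0f0                   # activate it now
# virsh net-autostart sr-iov-enp6s0f0               # optional: activate it on every host boot
# virsh net-list --all                              # confirm the network shows as active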

10 Prepare and Launch the VM image – KVM/libvirt Method
On the compute node, once you have a libvirt network based on the SR-IOV Virtual Functions, add a NIC from that network to the VM.
Create an XML fragment/file that describes the NIC, and optionally add a MAC address:
<interface type='network'>
  <mac address='aa:bb:cc:dd:ee:ff'/>
  <source network='sr-iov-enp6s0f2'/>
</interface>
Use # virsh edit [VM NAME] to insert the XML fragment into the VM domain definition, alongside the existing device elements (for example, after a <controller type='virtio-serial' index='0'> ... </controller> element).
Launch the VM:
# virsh start [VM NAME]
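Once the VM is up, a quick check that the VF actually arrived in the guest (a sketch; the interface name and PCI address inside the VM will differ per system):
# virsh dumpxml [VM NAME] | grep -A3 "interface type='network'"   # on the compute node
$ lspci | grep 'Virtual Function'                                 # inside the VM
$ ip link show                                                    # the new NIC carries the MAC set above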

11 Prepare the Hypervisor – QEMU Method
Using QEMU directly is not as elegant, but it works as well.
Get the PCI Domain:Bus:Slot.Function information for the VF:
# lshw -c network -businfo
Load the pci-stub driver if necessary:
# modprobe pci-stub
Unbind the NIC PCI device from its kernel driver and bind it to pci-stub. Follow the steps below to pass through each VF port to the VM:
# echo "[Vendor ID] [Device ID]" > /sys/bus/pci/drivers/pci-stub/new_id
# echo [PCI Domain:Bus:Slot.Function] > /sys/bus/pci/devices/[PCI Domain:Bus:Slot.Function]/driver/unbind
# echo [PCI Domain:Bus:Slot.Function] > /sys/bus/pci/drivers/pci-stub/bind
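A filled-in sketch with hypothetical values: a VF at PCI address 0000:06:02.0 whose vendor/device ID pair is 8086 154c (check your own values with lspci -nn or lshw):
# modprobe pci-stub
# echo "8086 154c" > /sys/bus/pci/drivers/pci-stub/new_id         # vendor/device ID of the VF
# echo 0000:06:02.0 > /sys/bus/pci/devices/0000:06:02.0/driver/unbind
# echo 0000:06:02.0 > /sys/bus/pci/drivers/pci-stub/bind
# lspci -k -s 06:02.0                                             # expect "Kernel driver in use: pci-stub"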

12 Launch the Image – QEMU Method
Start the virtual machine by running the following command:
# qemu-system-x86_64 -enable-kvm \
    -smp 4 -cpu host -m 4096 -boot c \
    -hda [your image] \
    -nographic -no-reboot \
    -device pci-assign,host=[VF PCI Bus:Slot.Function]
-enable-kvm = enable KVM full virtualization support
-m = memory to assign
-smp = number of SMP cores
-boot = boot option
-hda = virtual disk image
-device = device to attach
-cpu = select the cpu_model to emulate in the virtual machine (host = use the same cpu_model as the host CPU)
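For reference, a filled-in sketch with hypothetical values (disk image vm.qcow2, VF at 06:02.0). On newer QEMU/kernel combinations the legacy pci-assign device is replaced by VFIO, i.e. -device vfio-pci,host=06:02.0 with the device bound to the vfio-pci driver instead of pci-stub:
# qemu-system-x86_64 -enable-kvm \
    -smp 4 -cpu host -m 4096 -boot c \
    -hda vm.qcow2 \
    -nographic -no-reboot \
    -device pci-assign,host=06:02.0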

13 Install DPDK on the SR-IOV VF on the Virtual Machine
Now that the compute node has been set up, get the virtual machine ready.
From the Jump Server, ssh into your assigned Virtual Machine ($ ssh HostVM-____).
Scripts for the lab are located in /home/user/training/sr-iov-lab.
View the Virtual Functions that have already been loaded into the Virtual Machine:
$ ./00_show_net_info.sh
Note: Some steps are only necessary if the DPDK lab was done before the SR-IOV lab.
Compile DPDK and load it onto the Virtual Functions (see the sketch after this list for roughly what the script does):
$ ./04_build_load_dpdk_on_vf.sh
Write down the MAC addresses that were displayed in the previous step:
TESTPMD_MAC=___:___:___:___:___:___
PKTGEN_MAC =___:___:___:___:___:___
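A rough sketch of what a script such as 04_build_load_dpdk_on_vf.sh typically has to do; this is an assumption about the lab script, using the conventional RTE_SDK/RTE_TARGET environment variables of this DPDK era, and the VF PCI addresses (00:05.0, 00:06.0) are hypothetical:
$ cd $RTE_SDK && make install T=$RTE_TARGET                       # build DPDK
$ sudo modprobe uio
$ sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko                # load the userspace I/O driver
$ sudo $RTE_SDK/tools/dpdk_nic_bind.py --status                   # note the VF PCI addresses
$ sudo $RTE_SDK/tools/dpdk_nic_bind.py -b igb_uio 00:05.0 00:06.0 # bind both VFs to igb_uio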

14 Build and Run Testpmd and PKTGEN
On the virtual machine, build and run testpmd. Look at the parameters in the build script. Running testpmd requires that you know the MAC address of the port on which pktgen is going to run; this was output to the console in step 04.
# 05_build_start_testpmd_on_vf.sh [PKTGEN MAC]
Look at the command line to see what parameters we are using.
Open another SSH session into your assigned virtual machine (HostVM-____).
Build and launch pktgen:
# 06_build_start_pktgen_on_vf.sh
You need to know the MAC address of the port where pktgen is going to send packets, which is the port on which testpmd is waiting. You can find the testpmd port when you launch testpmd; you'll see lines that look like this:
Configuring Port 0 (socket 0)
Port 0: 52:54:WW:XX:YY:ZZ   (this is the testpmd MAC address)
Checking link statuses... Port 0 Link Up - speed Mbps - full-duplex
Note: You can also get the testpmd MAC address from step 04.
Allow CRC stripping: in a VM, using a VF, we can't disable CRC stripping. Edit pktgen-port-cfg.c and change line 94 to ".hw_strip_crc = 1,":
# vi /usr/src/pktgen/pktgen-port-cfg.c
Note: testpmd also has this problem, but we take care of it on the command line with --crc-strip.
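For orientation, a sketch of what the testpmd command line inside these scripts roughly looks like; this is an assumption, not the exact lab script, and the core mask, memory-channel count, and peer MAC are placeholders:
# ./testpmd -c 0x3 -n 4 -- -i --crc-strip --forward-mode=mac --eth-peer=0,[PKTGEN MAC]
Here -c is the core mask, -n the number of memory channels, -i starts the interactive prompt, --crc-strip keeps hardware CRC stripping enabled (it cannot be disabled on a VF), --forward-mode=mac forwards packets with rewritten MAC addresses, and --eth-peer=0,MAC sets the destination MAC for traffic leaving port 0.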

15 Generate and measure Traffic Flow with SR-IOV
Now that we have pktgen and testpmd launched, start the traffic.
1. In pktgen, set the mac 0 address to point to the testpmd SR-IOV port:
> set mac 0 [TESTPMD MAC]
2. Start generating traffic:
> start 0
3. View stats in testpmd:
> show port stats 0
4. Record the RX and TX info to compare with physical PCI passthrough:
Mbit/s RX:___ TX:___   PPS RX:___ TX:___

16 Prepare for Physical PCI Passthrough
Close pktgen:
> quit
Close testpmd.
Unload all drivers from the SR-IOV Virtual Functions:
# 07_unload_all_x710_drivers.sh
Watch the PCI pass-through NICs appear:
$ 08_wait_for_pf.sh
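Behind the scenes, a script like 07_unload_all_x710_drivers.sh has to release the VF ports from DPDK before the Physical Functions can be used; a rough sketch of that idea (an assumption about the lab script, with hypothetical VF PCI addresses):
$ sudo $RTE_SDK/tools/dpdk_nic_bind.py -u 00:05.0 00:06.0   # unbind the VFs from igb_uio
$ sudo rmmod igb_uio uio                                    # remove the userspace I/O modules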

17 Build and Run Testpmd and PKTGEN on Physical Functions
Build and load DPDK on the Physical Function NICs. Note the MAC addresses that are displayed at the end:
# 09_build_load_dpdk_on_pf.sh
Build and run testpmd on the PF. Running testpmd requires that you know the MAC address of the port on which pktgen is going to run; this was output to the console in step 09. Note that these are different addresses than the SR-IOV VFs. Why?
# 10_build_start_testpmd_on_pf.sh [PKTGEN MAC]
Build and launch pktgen in your other ssh session:
# 11_build_start_pktgen_on_pf.sh
In pktgen, set the mac 0 address to point to the testpmd port:
> set mac 0 [TESTPMD MAC]
Start generating traffic:
> start 0
View stats in testpmd:
> show port stats 0
Record the RX and TX info. How does it compare with SR-IOV Virtual Function passthrough?
Unload all XL710 drivers:
# 12_unload_all_x710_drivers.sh
TESTPMD_MAC=___:___:___:___:___:___
PKTGEN_MAC =___:___:___:___:___:___
Mbit/s RX:___ TX:___   PPS RX:___ TX:___

18 Backup information
SRIOV Pool (VF) x Queue
PF-VF Mailbox Interface Initialization
Mailbox Message support – DPDK IXGBE PMD vs. Linux ixgbe Driver
SRIOV L2 Filters and Offloads

19 Intel® X520 (10G) vs XL710 (40G) SRIOV Internals

20 Intel® Ethernet XL710 Filters
Filters are available to select a PF/VF for packet delivery within a LAN port, based on Ether Type, MAC/VLAN, and S/E-Tag.
Filters are available to select a VSI for packet delivery within a PF/VF (PCI function).
Filters are available to select an Rx/Tx queue for packet delivery within a VSI (pool of queues).
Explicit flow configuration using Flow Director allows packet steering to a VSI/Rx queue (see the sketch below).
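As one concrete illustration of flow steering, a sketch using the standard Linux ethtool ntuple interface, which the i40e driver maps onto Flow Director (the interface name, addresses, and queue number are hypothetical):
# ethtool -K enp6s0f0 ntuple on                                                     # enable ntuple/Flow Director filters
# ethtool -N enp6s0f0 flow-type tcp4 dst-ip 192.168.1.10 dst-port 5001 action 3    # steer this flow to Rx queue 3
# ethtool -n enp6s0f0                                                               # list the configured rules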

21 Reference
Intel® Network Builders University:
Basic Training – Course 3: NFV Technologies
Basic Training – Course 5: The Road to Network Virtualization
Basic Training – Course 6: Single Root I/O Virtualization
DPDK – Courses 1-5: DPDK API and PMDs
Data Plane Development Kit – dpdk.org
Intel Open Source Packet Processing
Intel® Resource Director Technology (Intel® RDT)
Intel® QAT Drivers

22 Questions? Sessions coming up…
DPDK Performance Analysis with VTune Amplifier
DPDK Performance Benchmarking
DPDK Hashing Algo. Support

23 Creating VFs – Using the DPDK PF Driver - Backup
1. To use the port with DPDK, you need to bind it to the igb_uio driver. You can check all the ports using the following:
# ./tools/dpdk_nic_bind.py --status
2. To bind a port to the igb_uio driver:
# ./dpdk_nic_bind.py -b igb_uio 06:00.0
3. Create the VFs from the DPDK PF driver:
# echo 4 > /sys/bus/pci/devices/0000\:bb\:dd.ff/max_vfs
4. You can verify the VFs that were created using the same lspci command as before.

24 DPDK PF Driver – Host CP/DP - Backup
Run the DPDK PF driver in the host using the testpmd application on a single PF port:
# ./x86_64-native-linuxapp-gcc/app/testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
<testpmd> set fwd mac
<testpmd> start
<testpmd> show port stats 0   (optional)

