AVS Brazos: IPv6
Agenda
- AVS IPv6 background
- Packet flows
- TSO/TCO
- Configuration
- Demo
- Troubleshooting tips
- Appendix
Some Feature Details
- Supports IPv6 for virtual machine traffic.
- VM NICs can be assigned IPv6 only, IPv4 only, or both IPv6 and IPv4 (dual stack).
- All IPv6 features supported by the ACI fabric, such as ICMPv6 and IPv6 Neighbor Discovery, are supported.
- Addresses can be assigned statically, via IPv6 Stateless Address Autoconfiguration (SLAAC), or via DHCPv6 (a guest-side sketch follows this list).
- In VLAN mode, IPv6 packets leave the host unchanged from what the VMs send.
- In VXLAN mode, AVS adds outer L2, L3 (IPv4), and L4 (UDP) headers to the IPv6 packet sent by the VM.
- The ESXi management vmk is IPv4; the OpFlex traffic vmk is IPv4; all VXLAN tunnels to the upstream use IPv4 (outer header).
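The three assignment methods map to standard guest-side configuration. A minimal Linux-guest sketch, with interface name eth0 and addresses drawn from the 2001:db8::/32 documentation prefix (all values here are illustrative, not from the deck):

# Static: add an address and a default route via the BD/EPG gateway
ip -6 addr add 2001:db8:1::10/64 dev eth0
ip -6 route add default via 2001:db8:1::1

# SLAAC: accept Router Advertisements so the guest autoconfigures
sysctl -w net.ipv6.conf.eth0.accept_ra=1

# DHCPv6: request an address with the ISC client
dhclient -6 eth0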
Some Feature Details
The AVS packet path works in two modes:
- FEX or Non-Switching (NS) mode: all traffic is punted to the leaf, and policy enforcement happens in the iLeaf only; VXLAN encap.
- Local Switching (LS) mode: AVS handles all forwarding to local destinations within an EPG and punts inter-EPG traffic to the leaf; both VLAN and VXLAN encap.
[Diagram: a hypervisor hosting VMs in EPG App and EPG Web under each mode; NS mode punts all traffic to the leaf, LS mode punts inter-EPG traffic only]
IPv6 VXLAN Local Switching: Unicast Within EPG, Between Servers
1. VM1 in EPG1 sends an IPv6 unicast packet toward VM2 in EPG1 on another server.
2. The physical leaf sends the IPv6 unicast to the other host/server that has EPG1.
3. The unicast is delivered to VM2 in EPG1.
4. The leaf is still the VXLAN termination point in Local Switching mode; AVS-host-to-AVS-host traffic has to go through the leaf.
5. In VXLAN mode, AVS appends outer L2, L3, and L4 headers to the IPv6 payload.
Frame layout on the wire: Outer L2 header | Outer IPv4 header | UDP header | VXLAN header | VM L2 header | VM IPv6 header | Payload
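To confirm this encapsulation on the wire, the outer UDP flow can be captured at a point that sees the uplink traffic. A sketch assuming a Linux capture host, interface eth0, and the IANA VXLAN port 4789 (adjust the port if your deployment encapsulates on a different one):

# Save VXLAN-encapsulated traffic for offline inspection
tcpdump -i eth0 -nn 'udp port 4789' -w vxlan-v6.pcap

# Quick decoded view of the first packets, inner IPv6 headers included
tcpdump -r vxlan-v6.pcap -nn -v | head -40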
IPv6 VLAN Local Switching: Unicast Within EPG, Between Servers
1. VM1 in EPG1 sends an IPv6 unicast packet.
2. The Layer 2 network forwards it directly.
3. The unicast is delivered to VM2 in EPG1.
4. In VLAN mode, traffic can traverse directly between AVS hosts.
5. In VLAN mode, IPv6 packets go out exactly as the VMs generate them, tagged with the VLAN ID.
Frame layout on the wire: VM L2 header with VLAN tag | VM IPv6 header | Payload
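In VLAN mode the tag and the untouched inner packet are directly visible. A sketch of checking the 802.1Q tag and IPv6 header from a Linux capture point (interface name illustrative):

# -e prints the link-level header, making the 802.1Q VLAN tag visible
tcpdump -i eth0 -e -nn 'vlan and ip6'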
TCP Segmentation And Checksum Offload
- Offloads segmentation and checksum calculation of large TCP-over-IPv6 packets onto a hardware NIC that supports it, or else does it in software, to save CPU time.
- Already supported for IPv4; we have now enabled it for IPv6.
- VLAN mode: most NICs have hardware support.
- VXLAN mode: only some NICs understand VXLAN offload and can do hardware TSO/TCO on the inner IPv6 packet from the VM. For other NICs, we do the VXLAN encap and TCP TSO/TCO in software. (A guest-side offload check is sketched below.)
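Inside a Linux guest, the current offload state can be checked and toggled with ethtool. A sketch assuming interface eth0:

# Show TSO and TX checksum offload state
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|tx-checksumming'

# Enable TSO and TX checksumming (use "off" to disable for comparison)
ethtool -K eth0 tso on tx on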
Configuration On Host And APIC
- Check the AVS chapter of the Brazos ACI virtualization guide: "Assigning an IP address to the Cisco AVS VM network adapter" and "Assigning a gateway address for the VMs connected to the Cisco AVS using the Cisco APIC GUI".
- Set up the IPv6 address on the VM.
- Set up the IPv6 gateway under the BD or EPG as desired, plus a few more configuration items therein; this will be covered in the demo.
- The usual Linux guest OS MTU is 1500. It can be changed, but do not go above MTU 8950 in VXLAN mode (set on the APIC); see the MTU sketch after this list.
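A guest-side sketch of checking and raising the MTU within the ceiling noted above (interface name and value are illustrative; the APIC-side MTU must be set accordingly):

# Check the current interface MTU
ip link show dev eth0 | grep -o 'mtu [0-9]*'

# Raise it, staying at or below 8950 in AVS VXLAN mode
ip link set dev eth0 mtu 8950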
Troubleshooting Tips
- Follow the same path as used to trace IPv4 VM packets.
- Ensure the configuration is properly done on both the VM and APIC sides.
- Make sure OpFlex is up and the EPG is properly downloaded; run the usual AVS troubleshooting for these.
- Duplicate Address Detection (DAD) is enabled on the ToR by default.
- To capture IPv6 packets, locate the LTL of the VM port that is sending or receiving the traffic, and the LTL of the port channel that is the uplink:

~ # vemcmd show port
  LTL  VSM Port  Admin  Link  State  Cause  PC-LTL  SGID  ORG  svcpath  Type  Vem Port
   30  Eth1/21   UP     DOWN  BLK    -          0     1    0    0             vmnic1
   44  Eth1/31   UP     UP    FWD    -         90     2    0    0             vmnic2
   54            UP     UP    FWD    -          0     2    0    0             vmk1
   56            UP     UP    FWD    -          0     0    0                  Fedora-213-1-nping.eth1
   58            UP     UP    FWD    -          0     0    0                  Fedora-213-2-nping.eth1
   90  Po        UP     UP    FWD    -          0     0    0
~ #
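Before diving into captures, a basic guest-side reachability and neighbor check often narrows the problem down. A minimal sketch assuming a Linux guest and an illustrative gateway address (DAD and ND issues show up here first):

# Ping the IPv6 gateway (use ping6 on older iputils)
ping -6 -c 3 2001:db8:1::1

# Inspect the neighbor cache; a REACHABLE entry for the gateway means ND works
ip -6 neigh show dev eth0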
Example
- VM port: LTL 51. Uplink port channel: LTL 1040.
- Set the capture size: vempkt size capture 128
- Start capturing: vempkt start
- Capture VM IPv6 TX packets: vempkt capture ingress ltl 51
- Capture VM IPv6 RX packets: vempkt capture egress ltl 51
- Capture VXLAN/VLAN+IPv6 TX packets: vempkt capture egress ltl 1040
- Capture VXLAN/VLAN+IPv6 RX packets: vempkt capture pre-ingress ltl 1040
- Give it enough time / trigger your packets of interest, then: vempkt stop
- Export: vempkt pcap export (the .pcap extension is added automatically)
- Copy the .pcap off the box and examine it with Wireshark; choose the option to decode VXLAN if required.
- Too many TCP retransmits are a sign that something is not right; they reduce the useful traffic.
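The Wireshark steps can also be scripted offline with tshark. A sketch assuming the export was named capture.pcap (hypothetical name) and standard VXLAN on UDP 4789; adjust the decode-as port for your deployment:

# Force VXLAN decode on the outer UDP port and show only inner IPv6 packets
tshark -r capture.pcap -d udp.port==4789,vxlan -Y ipv6

# Count TCP retransmissions; a high count confirms something is not right
tshark -r capture.pcap -Y tcp.analysis.retransmission | wc -l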
Appendix – TCO and TSO
- When enabled, the VM may send out large TCP packets whose size plus overhead exceeds the pNIC MTU, and it expects the "NIC" to segment them; AVS therefore has to either do it in software or have the hardware do it.
- Large packets (need TSO): we check whether the hardware can do VXLAN, TSO, and TCO, and offload to hardware if possible; otherwise we do it in software.
- Smaller packets: we offload TCO to hardware if available; no TSO is needed for such packets.
- Customers deploy mostly in VXLAN mode. The VMware software notifies the AVS software whether the NIC on each port has hardware VXLAN support; if so, AVS can offload the TSO/TCO to the NIC.
Appendix – TCO and TSO (cont.)
By default, VMware ESXi enables support for hardware TSO and IPv6 checksum-calculation offload:

# esxcli system settings advanced list -o /Net/UseHwTSO
   Path: /Net/UseHwTSO
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: When non-zero, use pNIC HW TSO offload if available
~ #
Appendix – TCO and TSO (cont.)
# esxcli system settings advanced list -o /Net/UseHwCsumForIPv6Csum
   Path: /Net/UseHwCsumForIPv6Csum
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: When non-zero, use pNIC HW_CSUM, if available, as IPv6 csum offload

# esxcli system settings advanced list -o /Net/UseHwIPv6Csum
   Path: /Net/UseHwIPv6Csum
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: When non-zero, use pNIC HW IPv6 csum offload if available
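Should one of these knobs need changing (for example, to force the software fallback while testing), the matching esxcli set call works; a sketch, not needed in normal operation:

# Disable pNIC hardware TSO (set back to 1 to re-enable)
esxcli system settings advanced set -o /Net/UseHwTSO -i 0

# Verify the change took effect
esxcli system settings advanced list -o /Net/UseHwTSO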
Appendix – TCO and TSO (cont.)
You can also check the TSO and TCO status on the uplink vmnics; by default they will be on. Refer to the NIC documentation to confirm whether it has VXLAN decode ability.

# ethtool --show-offload vmnic1
Offload parameters for vmnic1:
Cannot get device udp large send offload settings: Function not implemented
Cannot get device generic segmentation offload settings: Function not implemented
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp segmentation offload: on
udp fragmentation offload: off
generic segmentation offload: off
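If you are unsure which vmnic names exist on the host, the standard ESXi NIC inventory commands can be run first; a short sketch:

# List physical NICs with driver, link state, and MTU
esxcli network nic list

# Show driver details for one uplink (vmnic1 is an example)
esxcli network nic get -n vmnic1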
Appendix – TCO and TSO (cont.)
- In most typical Linux guest VMs, TSO and TCO will be on by default; this can be checked using ethtool --show-offload, as for the vmnics.
- Windows VM users can also enable/disable TSO; consult the latest VMware and Windows documentation on how to use TSO in Windows VMs.
- The guest VM MTU can be kept at the default 1500 or raised, but keep it below 8950 (for AVS VXLAN mode) to avoid packet loss due to the VXLAN encap.
- The MTU of the vmnics on the ESXi host can be left unchanged (usually 9000).
- For good performance, the vmxnet3 adapter can be used.
- In a lab network, sending unicast IPv6 iperf traffic from VM1 (host1) to VM2 (host2), we have seen around 1 Gbps on a single stream without TSO; with software or hardware TSO active, this improves to around 2 Gbps (a test sketch follows below).
- TSO is only useful for large segments; if a flow does not use such segments due to the nature of the use case, TSO does not kick in.
- An example of a NIC family with VXLAN + TCO/TSO offload support is the Emulex OCe14000 family. Some of these need their own drivers installed and additional ESXi configuration; consult vendor and VMware documentation.
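A way to reproduce the lab numbers above, assuming iperf3 is installed on both guests and using an illustrative server address (the deck says "iperf"; iperf3 flags are shown here):

# On VM2 (server side)
iperf3 -s

# On VM1 (client side): a single IPv6 stream for 30 seconds
iperf3 -6 -c 2001:db8:1::20 -t 30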