1
ITD Overview: Mouli Vytla, Samar Sharma, Rajendra Thirumurthi
2
ITD: Multi-Terabit Load-balancing with N5k/N6k/N7k
- ASIC-based L4 load-balancing at line rate; every N7k port can be used for load-balancing
- Redirect line-rate traffic to any device, for example web cache engines, Web Accelerator Engine (WAE), WAAS, VDS-TC, etc.
- No service module or external L4 load-balancer needed
- Provides IP stickiness and resiliency (like resilient ECMP)
- NAT (available for EFT); allows non-DSR deployments
- Weighted load-balancing
- Nexus 5k/6k (EFT/PoC)
- Provides the capability to create clusters of devices, e.g., firewalls, IPS, or Web Application Firewalls (WAF)
- Performs health monitoring and automatic failure handling
- Provides ACL along with redirection and load balancing simultaneously
- Order-of-magnitude reduction in configuration and ease of deployment
- The servers/appliances do not have to be directly connected to the N7k
- Supports both IPv4 and IPv6
3
ITD Deployment example
Diagram: clients send traffic to the VIP; an ACL selects that traffic, and ITD redirects and load-balances it across port channels Po-5 through Po-8. Note: the devices do not have to be directly connected to the N7k.
4
ITD feature Advantages slide 1 of 3
- Scales to a large number of nodes
- Significant reduction of configuration complexity: e.g., a 32-node cluster would require ~300 configuration lines without ITD; the ITD configuration requires only 40 lines
- N+M redundancy
- Health monitoring of servers/appliances
- DCNM support
- IP stickiness, resiliency
- Supports both IPv4 and IPv6, with VRF awareness
- Zero-touch appliance deployment
- No certification, integration, or qualification needed between the appliances and the Nexus 7k switch
5
ITD feature Advantages slide 2 of 3
- Simultaneously use heterogeneous appliances (different models/vendors)
- Flow-coherent symmetric traffic distribution: flow coherency for bidirectional flows; the same device receives the forward and reverse traffic
- Traffic selection: ACL, VIP/protocol/port
- Not dependent on N7k hardware architecture: independent of line-card types, ASICs, Nexus 7000, Nexus 7700, etc.
- Customer does not need to be aware of "hash-modulo" or "rotate" options for port-channel configuration
- The ITD feature does not add any load to the supervisor CPU
- ITD uses orders of magnitude less hardware TCAM resources than WCCP
6
ITD feature Advantages slide 3 of 3
- CAPEX: wiring, power, rack-space, and cost savings
- Automatic failure handling: dynamically reassigns traffic (going towards a failed node) to a standby node; no manual configuration or intervention required if a link or server fails
- Migration from N7000 to N7700 and F3: the customer does not need to be concerned about upgrading to N7700 and F3; the ITD feature is hardware agnostic and works seamlessly after the upgrade
- Complete transparency to the end devices
- Simplified provisioning and ease of deployment
- Debuggability: ITD doesn't have WCCP-like handshake messages
- The solution handles an unlimited number of flows
7
Why & Where Do We Need This Feature: Network Deployment Examples
8
ITD use-cases
- Use with clustering (services load-balancing): e.g., firewalls, Hadoop/Big Data, Web Application Firewalls (WAF), IPS, load-balancing to Layer 7 load-balancers
- Redirecting: e.g., Web Accelerator Engines (WAE), web caches
- Server load-balancing: e.g., application servers, web servers, VDS-TC (video transparent caching)
- Replace PBR
- Replace ECMP, port-channel
- DCI
- Disaster recovery
Please note that ITD is not a replacement for a Layer 7 load-balancer (URL, cookies, SSL, etc.).
9
ITD Use-case: Clustering
There is a performance gap between the switch and the servers/appliances, so appliance vendors try to scale capacity by stacking or clustering. Both models have deficiencies.
Stacking solution (port-channel, ECMP) drawbacks:
- Manual configuration with a large number of steps
- Application-level node failure not detected
- Ingress/egress failure handling across a pair of switches requires manual intervention
- Traffic black-holing can easily occur
- Doesn't scale for a large number of nodes
Clustering solution drawbacks:
- Redirection of traffic among cluster nodes
- Doesn't typically scale above 8 nodes
- Dedicated control link between nodes
- Dedicated port(s) reserved on each node for control-link traffic
- Very complex to implement and debug
10
ITD comparison with Port-channel, ECMP, PBR
Feature/Benefit comparison: Port Channel / ECMP / PBR vs. ITD
- Link failure detection
- Appliance/server failure detection
- Weighted load-balancing
- NAT: ITD (soon)
- VIP, advertisement
- Auto re-configuration of N7k(s) in case of failures
- Hot-standby support (N+M redundancy)
- Resilient: non-disruptive to existing flows
- Quick failure detection/convergence
- Max # of nodes for scaling: 16 vs. 256 with ITD
- Ease of configuration, troubleshooting
- Deployment complexity: complex vs. simple
- Avoids traffic black-holing in sandwich-mode topology
- Adaptive flow distribution, auto-sync for bi-directional flow coherency (post 6.2(10))
11
ITD use-case : Web Accelerator Engines
Traffic redirection to devices such as web caches and video caches. Appliance vendors try to redirect using WCCP or PBR; both models have deficiencies.
WCCP solution drawbacks:
- The appliance has to support the WCCP protocol
- Explosion in the number of TCAM entries due to WCCP
- Complex protocol between switch and appliance
- Troubleshooting involves both the switch and the appliance
- The user cannot choose the load-balancing method
- Appliances have to be aware of the health of other appliances
- Supervisor CPU utilization becomes high
- Only IPv4 is supported on the N7k
PBR solution drawbacks:
- Very manual and error-prone method
- Very limited probing
- No automatic failure detection and correction (failaction); cannot use reassign as the failaction policy
- Doesn't scale
12
ITD comparison with WCCP
Feature/Benefit comparison: N7k WCCP vs. N7k ITD
- Appliance is unaware of the protocol: No vs. Yes
- Protocol support: IPv4 vs. IPv4 and IPv6
- Number of TCAM entries (say, 100 SVIs, 8 nodes, 20 ACEs): very high (16,000) vs. very low (160)
- Weighted load-balancing
- User can specify which bits to use for load-balancing
- Number of nodes: 32 vs. 256
- Support for IP SLA probes
- Support for virtual IP
- Support for L4-port load-balancing
- Capability to choose source or destination IP for load-balancing
- Customer support needs to look at: both switch and appliance vs. switch only
- Adaptive flow distribution: ITD yes (post 6.2(8))
- Supervisor CPU overhead: high vs. none
- Egress ACL
- DCNM support
13
ITD use-case : Server Load-Balancing
- Servers are migrating from 1G to 10G
- The largest load-balancers today support ~100G; large data centers need multi-terabit load-balancing
- ITD can perform ACL + VIP + redirection + load-balancing on each packet at line rate
- ITD also provides support for advertising the VIP to the network
- ITD allows wild-card VIPs and L4 port numbers
- Server health monitoring
- E.g., load-balance traffic to 256 servers of 10G each
- Weighted load-balancing to distribute load proportionately (see the sketch below)
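As a concrete illustration of the bullets above, here is a minimal sketch of a weighted server farm behind a VIP, built only from commands shown later in this deck (device group with weights, TCP probe, virtual ip with advertise, src-ip load-balancing). The addresses, names, weights, and port are illustrative placeholders, not values from the original slides.

feature itd
!
itd device-group WEB-FARM
  ! weight 4 vs. weight 1: the first node receives roughly four times the buckets
  node ip 10.10.10.11 weight 4
  node ip 10.10.10.12 weight 1
  probe tcp port 80 frequency 10 retry-count 5 timeout 5
!
itd WEB-VIP
  ingress interface e3/1
  device-group WEB-FARM
  ! VIP on TCP 80; advertise enable injects a static route for the VIP
  virtual ip 203.0.113.100 255.255.255.255 tcp 80 advertise enable
  load-balance method src ip
  no shut

Scaling this to 256 servers of 10G each is simply a longer node list in the same device group.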
14
ITD comparison with Traditional Load-balancer
Feature/Benefit comparison: traditional L4 load-balancer vs. ITD
- Number of moving parts: external appliance needed vs. no appliance or service module needed
- Hardware: typically network-processor based vs. ASIC based
- 10G server migration: doesn't scale vs. scales well
- Bandwidth: ~100 Gb vs. ~10 Tb
- User can specify which bits to use for load-balancing: typically no vs. yes
- ACL + VIP + redirection + LB: performance degradation vs. line rate
- Customer support needs to look at: both switch and appliance vs. switch only
- Wiring, power, rack-space, cost: extra vs. not needed
15
ITD Clustering: one-ARM mode Topology
Diagram: one-ARM mode; client traffic is load-balanced by ITD (src-ip scheme) across port channels Po-5 through Po-8. Note: the devices do not have to be directly connected to the N7k.
16
ITD Clustering: Sandwich Mode topology
Diagram: sandwich-mode topology with N7k-1 on the outside (facing the clients) and N7k-2 on the inside, with src-ip load-balancing on the outside and dst-ip load-balancing on the inside.
- Configure an ITD service for each network segment: one for the outside network and another for the inside network
- Configure the src-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the outside network from the Internet
- Configure the dst-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the inside network from the servers
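A minimal sketch of the two services just described, one per N7k, assuming each device group points at the firewall cluster's addresses on that segment; the names, interfaces, and addresses are illustrative (the later slide "ITD: Configure a Service" shows the same pattern with CLI prompts).

! N7k-1 (outside segment): src-ip scheme for traffic entering from the Internet
feature itd
itd device-group FW-CLUSTER
  node ip 10.1.1.1
  node ip 10.1.1.2
  probe icmp
itd OUTSIDE-SVC
  ingress interface e3/1
  device-group FW-CLUSTER
  load-balance method src ip
  no shut

! N7k-2 (inside segment): dst-ip scheme for traffic entering from the servers
feature itd
itd device-group FW-CLUSTER
  node ip 10.2.1.1
  node ip 10.2.1.2
  probe icmp
itd INSIDE-SVC
  ingress interface e3/2
  device-group FW-CLUSTER
  load-balance method dst ip
  no shut

Using src-ip on one side and dst-ip on the other keeps both directions of a flow pinned to the same firewall, which is the bidirectional flow coherency described earlier in the deck.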
17
ITD Clustering: Sandwich Mode with NAT
Diagram: sandwich mode with NAT; N7k-1 (outside) and N7k-2 (inside) load-balance the cluster, with the src-ip scheme on the outside and dst-ip on the inside. Clients include external, internal, and mobile devices. Packet header states shown in the diagram:
- Client to VIP: Src IP = client, Dest IP = VIP
- After destination NAT towards the real server (RS): Src IP = client IP, Dest IP = RS
- Return from the server: Src IP = RS, Dest IP = client
- After reverse NAT towards the client: Src IP = VIP, Dest IP = client
Configuration steps:
- Configure an ITD service for each network segment: one for the outside network and another for the inside network
- Configure the src-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the outside network from the Internet
- Configure the dst-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the inside network from the servers
18
ITD Clustering: Sandwich Mode (two VDCs)
Diagram: sandwich mode using two VDCs on the same chassis, with an ITD service in each VDC; one VDC handles the outside segment (src-ip load-balancing) and the other the inside segment (dst-ip load-balancing).
- Configure an ITD service for each network segment: one for the outside network and another for the inside network
- Configure the src-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the outside network from the Internet
- Configure the dst-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the inside network from the servers
19
ITD Clustering: one-ARM mode, VPC Topology
Diagram: one-ARM mode with a vPC between N7k-1 and N7k-2. The ITD service must be configured manually and identically on each N7k.
20
ITD Load-balancing: VIP mode
Diagram: clients reach a load-balancing VIP on the N7k via Po-1; traffic is distributed to servers on Po-2 and Po-3 using a src-ip based load-distribution scheme.
- The VIP address has to be configured as a loopback address on each server
- ARP for the VIP needs to be disabled on the servers
21
ITD: Load-balance selective Traffic (ACL + VIP + Redirect + LB)
Diagram: client traffic destined to the VIP is selected by an ACL and load-balanced by ITD (src-ip scheme) across web-cache/video-cache/CDN devices on port channels Po-5 through Po-8.
22
Traditional Data center (without ITD)
Diagram: a traditional data center without ITD; clients on the outside pass through firewall load-balancers and server L4 load-balancers in front of the web-server and application-server tiers on the inside.
23
ITD enabled Data center
Diagram: the same data center with ITD on the N7k load-balancing the firewall and server tiers (clients, web servers, app servers).
- Configure an ITD service for each network segment: one for the outside network and another for the inside network
- Configure the src-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the outside network from the Internet
- Configure the dst-ip load-distribution scheme for the ITD service on the ingress interface for traffic entering the inside network from the servers
24
N7K ITD: NAT with VIP
Diagram: clients reach the load-balancing VIP via Po-1; steps 1-4 trace a packet through the NAT. Per-step MAC rewrites:
- Step 1: dst-mac = N7K MAC, src-mac = Router MAC
- Step 2: dst-mac = Server MAC, src-mac = N7K MAC
- Step 3: dst-mac = N7K MAC, src-mac = Server MAC
- Step 4: dst-mac = Router MAC, src-mac = N7K MAC
25
N7K ITD: NAT With VIP Port
Diagram: two clients reach the same VIP on different L4 ports (TCP 80 and TCP 443) via Po-1; steps 1-4 trace each flow through the NAT.
NAT for Client-1 (VIP TCP 80):
- Step 1: dst-mac = N7K MAC, src-mac = Router MAC, TCP 80
- Step 2: dst-mac = Server MAC, src-mac = N7K MAC, TCP 80
- Step 3: dst-mac = N7K MAC, src-mac = Server MAC, TCP 80
- Step 4: dst-mac = Router MAC, src-mac = N7K MAC, TCP 80
NAT for Client-2 (VIP TCP 443):
- Step 1: dst-mac = N7K MAC, src-mac = Router MAC, TCP 443
- Step 2: dst-mac = Server MAC, src-mac = N7K MAC, TCP 443
- Step 3: dst-mac = N7K MAC, src-mac = Server MAC, TCP 443
- Step 4: dst-mac = Router MAC, src-mac = N7K MAC, TCP 443
26
N7K ITD: NAT configuration:
itd device-group webserver
  node ip
  node ip
itd test
  device-group webserver
  virtual ip tcp 80
  virtual ip tcp 443
  nat destination
  no shut
Note: for reverse NAT translation (server IP to VIP), ITD uses the protocol/port configured as part of the VIP to match the reverse (server-to-client) traffic. This allows the rest of the server-to-server and server-to-client traffic to work independently.
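A filled-in version of the same configuration, for illustration only: the node addresses, VIP, and ingress interface below are hypothetical placeholders (the slide above does not specify them), and the VIP netmask and ingress line follow the syntax shown later in this deck.

feature itd
!
itd device-group webserver
  node ip 192.0.2.11
  node ip 192.0.2.12
!
itd test
  device-group webserver
  ingress interface e3/1
  ! two VIP entries on the same address, one per L4 port; the port is also
  ! what ITD matches for the reverse (server-to-client) translation
  virtual ip 203.0.113.10 255.255.255.255 tcp 80
  virtual ip 203.0.113.10 255.255.255.255 tcp 443
  nat destination
  no shut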
27
ITD Clustering: Use with VMs
Diagram: ITD on the N7k (e3/1, VLAN 2000) load-balances client traffic to web-server VMs running on Cisco UCS, each attached through a vNIC/vSwitch on VLAN 2000.
28
Feature Specs & Details
29
ITD Feature Sizing
Resource type / max limit:
- Nodes per device group: 256
- Ingress interfaces per ITD service: 512
- VIPs per ITD service: 16
- Probes per VDC: 500
- Number of ITD services per VDC: 32
- ITD services per N7k: 32 x (# of VDCs)
Note: these limits are for the 6.2(10) NX-OS release.
30
Configuration & Troubleshooting
31
ITD: Enabling Feature
Command Syntax: [no] feature itd
- Executed in CLI config mode
- Enables/disables the ITD feature
N7k# conf t
Enter configuration commands, one per line. End with CNTL/Z.
N7k(config)# feature itd
N7k# sh feature | grep itd
itd enabled
32
ITD: Service Creation steps
Three primary steps to configure an ITD service:
1. Create a device group
2. Create the ITD service
3. Attach the device group to the ITD service
NOTE: ITD is a conditional feature and needs to be enabled via "feature itd". An EL2 license is required.
33
ITD: Creating a Device group
A device group provides a template to group devices. A device group contains:
- Node IP address
- Active or standby mode of a node
- Probe to use for health monitoring of the node
Creating a device group:
N7k(config)# itd device-group FW-INSPECT
Configuring an active node:
N7k(config-device-group)# node ip
Configuring a standby node:
N7k(config-device-group)# node ip mode hot-standby
Probe options:
N7k(config-device-group)# probe ?
  icmp  ITD probe icmp
  tcp   ITD probe tcp
  udp   ITD probe udp
  dns   ITD DNS probe
N7k(config-device-group)# probe icmp frequency 10 retry-count 5 timeout 3
N7k(config-device-group)# probe tcp port 80 frequency 10 retry-count 5 timeout 5
N7k(config-device-group)# probe udp port 53 frequency 10 retry-count 5 timeout 5
Note: for TCP/UDP probes, a destination port number can be specified.
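Putting the commands on this slide together, a minimal sketch of a device group with two active nodes, one hot-standby node, and an ICMP probe; the group name and node addresses are illustrative placeholders.

feature itd
!
itd device-group FW-INSPECT
  node ip 10.10.10.1
  node ip 10.10.10.2
  ! hot-standby node: takes over only when an active node fails
  node ip 10.10.10.3 mode hot-standby
  probe icmp frequency 10 retry-count 5 timeout 3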
34
ITD: Configuring Device Group
Command Syntax: [no] itd device-group <device-group-name>
- Executed in CLI config mode
- Creates/deletes a device group
N7k(config)# feature itd
N7k(config)# itd device-group WEBSERVERS
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
35
ITD: Configuring Device Group w/ group-level standby
Command Syntax: [no] itd device-group <device-group-name>
- Executed in CLI config mode
- Creates/deletes a device group
N7k(config)# feature itd
N7k(config)# itd device-group WEBSERVERS
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip mode hot-standby
36
ITD: Configuring Device Group w/ node-level standby
Command Syntax: [no] itd device-group <device-group-name>
- Executed in CLI config mode
- Creates/deletes a device group
N7k(config)# feature itd
N7k(config)# itd device-group WEBSERVERS
N7k(config-device-group)# node ip standby
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
37
ITD: Configuring Device Group w/ weights for load distribution
Command Syntax: [no] itd device-group <device-group-name>
- Executed in CLI config mode
- Creates/deletes a device group
N7k(config)# feature itd
N7k(config)# itd device-group WEBSERVERS
N7k(config-device-group)# node ip weight 2
N7k(config-device-group)# node ip weight 4
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
38
ITD: Configuring Probe
Command Syntax:
[no] probe icmp [frequency <freq> | timeout <timeout> | retry-count <retry-count>]
[no] probe {tcp | udp} port <port-num> [frequency <freq> | timeout <timeout> | retry-count <retry-count>]
- Executed in CLI config mode, as a sub-mode of the ITD device-group CLI
- Used for health monitoring of nodes
N7k(config)# itd device-group WEBSERVERS
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# probe icmp
39
ITD: Creating ITD Service
ITD service attributes:
- device-group: associate a device group with the service
- ingress interface: specify the list of ingress interfaces
- load-balance: select the load-distribution method
- virtual: configure a virtual IP
N7k(config)# itd <service-name> ?
  device-group  ITD device group
  failaction    ITD failaction
  ingress       ITD ingress interface
  load-balance  ITD loadbalance scheme
  peer          Peer for sandwich mode
  virtual       ITD virtual ip configuration
  vrf           ITD service vrf
  nat           Network Address Translation
N7k(config-itd)# load-balance method ?
  dst  Destination based parameters
  src  Source based parameters
N7k(config-itd)# load-balance method src ?
  ip         IP
  ip-l4port  IP and L4 port
N7k(config-itd)# virtual ip ?
  advertise  Advertise
  tcp        TCP Protocol
  udp        UDP Protocol
40
ITD: Configuring a Service
Command Syntax: [no] itd <service-name>
- Executed in CLI config mode
- Creates/deletes an ITD service
N7k(config)# itd WebTraffic
41
ITD: Configuring Ingress Interface
Command Syntax: [no] ingress interface <interface 1>, <interface 2>, <interface range>
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Specifies the list of ingress interfaces for the ITD service
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
42
ITD: Associating Device Group
Command Syntax: [no] device-group <device group name>
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Specifies the device group to associate with the ITD service
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
43
ITD: Configuring Loadbalance method
Command Syntax: [no] load-balance method [src | dst] [ip | ip-l4port [tcp | udp] range <start> <end>]
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Specifies the load-balancing method
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance method src ip
44
ITD: Configuring Loadbalance buckets
Command Syntax: [no] load-balance method [src | dst] buckets <bucket> mask-position <mask>
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Specifies the number of load-balancing buckets
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance buckets 16
45
Loadbalance Bucket
The load-balance bucket option lets the user specify the number of ACLs (buckets) created per service. The bucket value must be configured in powers of 2. When more buckets are configured than there are active nodes, the buckets are assigned to the nodes in round-robin fashion. Bucket configuration is optional; by default the value is computed from the number of configured nodes. A worked sketch follows.
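A short sketch of the bucket-to-node relationship, with illustrative names and addresses: four active nodes with "load-balance buckets 16" gives each node four buckets, assigned round-robin.

itd device-group APP-NODES
  node ip 10.2.2.1
  node ip 10.2.2.2
  node ip 10.2.2.3
  node ip 10.2.2.4
!
itd AppTraffic
  ingress interface e3/1
  device-group APP-NODES
  ! 16 buckets (a power of 2) over 4 active nodes: round-robin assignment
  ! gives each node 4 buckets; omitting this line lets ITD compute a default
  load-balance buckets 16
  no shut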
46
ITD: Configuring Loadbalance mask-position
Command Syntax: [no] load-balance mask-position <mask>
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Specifies the load-balancing mask position
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance mask-position 8
47
ITD: Configuring VIP
Command Syntax: [no] virtual [ip | ipv6] <ip-address> [<netmask> | <prefix>] [ip | tcp <port-num> | udp <port-num>] [advertise enable | disable]
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Used to host a VIP on the N7k
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance method src ip
N7k(config-itd)# virtual ip
48
ITD: Configuring VIP with advertise
Command Syntax: [no] virtual [ip | ipv6] <ip-address> [<netmask> | <prefix>] [ip | tcp <port-num> | udp <port-num>] [advertise enable | disable]
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Used to host a VIP on the N7k, with advertise enabled
- Advertise enable is RHI for ITD: it creates static routes for the configured VIP
- The static routes can be redistributed into the user-configured routing protocol (see the sketch below)
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance method src ip
N7k(config-itd)# virtual ip advertise enable
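The slide only states that the advertise-created static routes can be redistributed into the routing protocol of the user's choice. A hedged sketch of one way that might look with OSPF on NX-OS is below; the VIP prefix, prefix-list, route-map, and OSPF instance names are illustrative assumptions, not part of the ITD feature itself.

! Assuming the service above advertises VIP 203.0.113.100/32 (illustrative address),
! the ITD-created static route could be redistributed into OSPF like this:
ip prefix-list ITD-VIPS seq 5 permit 203.0.113.100/32
route-map ITD-VIP-ROUTES permit 10
  match ip address prefix-list ITD-VIPS
router ospf 1
  redistribute static route-map ITD-VIP-ROUTES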
49
ITD: Configuring VIP with NAT
Command Syntax: [no] nat destination
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Used to translate the destination IP between the VIP and the node (real server) addresses
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# load-balance method src ip
N7k(config-itd)# virtual ip advertise enable
N7k(config-itd)# nat destination
50
ITD: Configuring failaction node reassign
Command Syntax: [no] failaction node reassign
- Executed in CLI config mode, as a sub-mode of the ITD service CLI
- Used to reassign traffic to an active node when a node fails
- ITD probe configuration is mandatory; supported only for IPv4 addresses
- Once the failed node comes back, the recovered node starts receiving traffic again
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e4/1-10
N7k(config-itd)# device-group WEBSERVERS
N7k(config-itd)# failaction node reassign
51
Failaction node reassign contd.
Failaction reassign without a standby node:
- When a node goes down (probe failure), its traffic is reassigned to the first available active node.
- When the node comes back up (probe success), the recovered node starts handling its connections again.
- If all nodes are down, the packets get routed automatically.
Failaction reassign with a standby node:
- When a node goes down (probe failure) and there is a working standby node, its traffic is directed to the first available standby node.
- When all nodes are down, including the standby node, the traffic is reassigned to the first active node that becomes available.
52
No Failaction reassign
With probe:
- An ITD probe can detect node failure or loss of service reachability and bring the node down.
- If the node fails and a standby is configured, the standby node takes over its connections.
- If the node fails and no standby is configured, the traffic gets routed and is not reassigned, because failaction is not configured. Once the node recovers, it starts handling traffic again.
Without probe:
- Without a probe configuration, ITD cannot detect node failure.
- When a node is down, ITD does not reassign or redirect its traffic to a different active node.
53
ITD : failaction node reassign
Failaction mode: Bypass (default) or Reassign; behavior depends on whether a probe and a standby are configured.
- Bypass, standby not configured: on node failure, traffic gets routed.
- Bypass, standby configured: on node failure, traffic is redirected to the standby.
- Reassign: traffic is redirected to the first available active node, both on node failure and when the node and standby have both failed.
Note: when the failed node comes back, ITD resumes redirecting traffic to it. A configuration sketch follows.
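Tying the table above back to configuration: a minimal sketch combining a probe, a hot-standby node, and failaction node reassign, so that node failures are handled automatically (standby takeover and/or reassignment to an active node, per the behaviors above). Names, addresses, and the probe port are illustrative.

feature itd
!
itd device-group FW-CLUSTER
  node ip 10.1.1.1
  node ip 10.1.1.2
  node ip 10.1.1.3 mode hot-standby
  ! a probe is mandatory for failaction node reassign (IPv4 only)
  probe tcp port 80 frequency 10 retry-count 5 timeout 5
!
itd FW-SERVICE
  ingress interface e3/1
  device-group FW-CLUSTER
  load-balance method src ip
  failaction node reassign
  no shut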
54
ITD: Configure a Service
Configuration steps:
1. Enable the ITD feature on both N7ks
2. Configure a device group
3. Configure an ITD service: configure the service name, specify the ingress interface, associate the device group, specify the load-distribution scheme, and activate the ITD service

N7k-1 configuration (ingress e3/1, src-ip scheme):
N7k-1(config)# feature itd
N7k-1(config)# itd device-group FW-INSPECT
N7k-1(config-device-group)# node ip
N7k-1(config-device-group)# node ip
N7k-1(config-device-group)# probe icmp
N7k-1(config)# itd WebTraffic
N7k-1(config-itd)# ingress interface e3/1
N7k-1(config-itd)# device-group FW-INSPECT
N7k-1(config-itd)# load-balance method src ip
N7k-1(config-itd)# no shut

N7k-2 configuration (ingress e3/2, dst-ip scheme):
N7k-2(config)# feature itd
N7k-2(config)# itd device-group FW-INSPECT
N7k-2(config-device-group)# node ip
N7k-2(config-device-group)# node ip
N7k-2(config-device-group)# probe icmp
N7k-2(config)# itd WebTraffic
N7k-2(config-itd)# ingress interface e3/2
N7k-2(config-itd)# device-group FW-INSPECT
N7k-2(config-itd)# load-balance method dst ip
N7k-2(config-itd)# no shut
55
ITD: Complete Service Configuration
N7k-1 (ingress e3/1):
N7k-1(config)# feature itd
N7k-1(config)# itd device-group FW-INSPECT
N7k-1(config-device-group)# node ip
N7k-1(config-device-group)# node ip
N7k-1(config-device-group)# probe icmp
N7k-1(config)# itd WebTraffic
N7k-1(config-itd)# ingress interface e3/1
N7k-1(config-itd)# device-group FW-INSPECT
N7k-1(config-itd)# load-balance method src ip
N7k-1(config-itd)# no shut

N7k-2 (ingress e3/2):
N7k-2(config)# feature itd
N7k-2(config)# itd device-group FW-INSPECT
N7k-2(config-device-group)# node ip
N7k-2(config-device-group)# node ip
N7k-2(config-device-group)# probe icmp
N7k-2(config)# itd WebTraffic
N7k-2(config-itd)# ingress interface e3/2
N7k-2(config-itd)# device-group FW-INSPECT
N7k-2(config-itd)# load-balance method dst ip
N7k-2(config-itd)# no shut
56
ITD: RACL + ITD Loadbalancing Configuration
Three simple steps to configure RACL + ITD:
1. Configure the access list and apply it on the ingress interface
2. Configure the device group
3. Create the ITD service

N7K configuration:
N7k(config)# ip access-list test
N7k(config-acl)# permit ip / /16
N7k(config-acl)# permit ip / /32
N7k(config-acl)# end
N7k(config)# int e3/1
N7k(config-if)# ip access-group test in
N7k(config-if)# end
N7k(config)# feature itd
N7k(config)# itd device-group FW-INSPECT
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# probe icmp
N7k(config-device-group)# end
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1
N7k(config-itd)# device-group FW-INSPECT
N7k(config-itd)# no shut
57
ITD: VIP Service Configuration
Diagram: clients reach the load-balancing VIP through ingress interfaces e3/1 and e3/2.
N7k(config)# feature itd
N7k(config)# itd device-group WEB-SERVERS
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# node ip
N7k(config-device-group)# probe icmp
N7k(config)# itd WebTraffic
N7k(config-itd)# ingress interface e3/1, e3/2
N7k(config-itd)# device-group WEB-SERVERS
N7k(config-itd)# virtual
N7k(config-itd)# no shut
58
DCNM : ITD Template support
ITD is supported in DCNM as a template
59
DCNM : Example ITD configuration
60
DCNM : Generated ITD configuration
61
Additional Information
Mailing lists / CDETS:
- Project: CSC.datacenter
- Product: n7k-platform
- Component: itd
Config guide: os/itd/configuration/guide/b-Cisco-Nexus-7000-Series-Intelligent-Traffic-Director-Configuration-Guide-Release-6x.html
Command reference: os/itd/command/reference/n7k_itd_cmds/itd_cmds.html
62
Case Study 1: ITD Clustering with Load-balancers
Diagram: an ITD service on e3/1 (VLAN 2000) load-balances client traffic (generated by IXIA) to web-server VMs on Cisco UCS, each attached through a vNIC/vSwitch on VLAN 2000.
63
Case Study 2: ITD Clustering with WAF appliances
Diagram: the same setup with WAF appliances; an ITD service on e3/1 (VLAN 2000) load-balances client traffic (generated by IXIA) to WAF appliance VMs in front of the web servers on Cisco UCS, each attached through a vNIC/vSwitch on VLAN 2000.
65
Case-study 3 : VDS-TC-16B Network design (Blade Type)
Diagram: two VDS-TC-16B cache clusters, each built from 8 UCS B200 blades behind a pair of UCS 6248 Fabric Interconnects (4x40GE uplinks; 16 x 2 x 10GE Twinax to the cache blades; 10x 1GE management) with 5x IBM DS3524 storage arrays, a cache manager on UCS C220, and analytics on UCS C240. Both clusters attach through Nexus 2248TP FEXes to a Nexus 7706 carved into VDC#1, VDC#2, and a distribution VDC facing the Internet and the clients.
66
ITD comparison with Port-channel, ECMP, PBR
Feature/Benefit comparison: Port Channel / ECMP / PBR vs. ITD
- Link failure detection
- Appliance/server failure detection
- Weighted load-balancing
- NAT: ITD (soon)
- VIP, advertisement
- Auto re-configuration of N7k(s) in case of failures
- Hot-standby support (N+M redundancy)
- Resilient: non-disruptive to existing flows
- Quick failure detection/convergence
- Max # of nodes for scaling: 16 vs. 256 with ITD
- Ease of configuration, troubleshooting
- Deployment complexity: complex vs. simple
- Avoids traffic black-holing in sandwich-mode topology
- Adaptive flow distribution, auto-sync for bi-directional flow coherency (post 6.2(10))
67
ITD comparison with WCCP
Feature/Benefit comparison: N7k WCCP vs. N7k ITD
- Appliance is unaware of the protocol: No vs. Yes
- Protocol support: IPv4 vs. IPv4 and IPv6
- Number of TCAM entries (say, 100 SVIs, 8 nodes, 20 ACEs): very high (16,000) vs. very low (160)
- Weighted load-balancing
- User can specify which bits to use for load-balancing
- Number of nodes: 32 vs. 256
- Support for IP SLA probes
- Support for virtual IP
- Support for L4-port load-balancing
- Capability to choose source or destination IP for load-balancing
- Customer support needs to look at: both switch and appliance vs. switch only
- Adaptive flow distribution: ITD yes (post 6.2(8))
- Supervisor CPU overhead: high vs. none
- Egress ACL
68
ITD comparison with Traditional Load-balancer
Feature/Benefit comparison: traditional L4 load-balancer vs. ITD
- Number of moving parts: external appliance needed vs. no appliance or service module needed
- Hardware: typically network-processor based vs. ASIC based
- 10G server migration: doesn't scale vs. scales well
- Bandwidth: ~100 Gb vs. ~10 Tb
- User can specify which bits to use for load-balancing: typically no vs. yes
- ACL + VIP + redirection + LB: performance degradation vs. line rate
- Customer support needs to look at: both switch and appliance vs. switch only
- Wiring, power, rack-space, cost: extra vs. not needed
69
ITD Benefits Summary
Feature/Benefit comparison: Manual Config / SDN / ITD
- Link failure detection
- Appliance failure detection
- Adaptive flow distribution
- Auto re-configuration of N7k(s)
- Hot-standby support (N+M redundancy)
- Non-disruption of existing flows
- Works without an external device/controller
- Quick failure detection/convergence: slowest / slow / faster
- Introduces an additional point of failure besides the N7k/appliance: SDN (controller)
- Max # of nodes for scaling: 8/16 vs. no limit with ITD
- Ease of troubleshooting
- Deployment complexity: complex vs. simple
- Automatic handling of route changes
- Error reporting: not granular vs. granular
70
Show CLI: "show itd"
switch# sh itd

Name  Probe  LB Scheme  Status  Buckets
WEB   ICMP   src-ip     ACTIVE  2

Device Group        VRF-Name
WEB-SERVERS

Pool          Interface  Status  Track_id
WEB_itd_pool  Eth3/3     UP      3

Virtual IP   Netmask/Prefix   Protocol   Port
             /                IP         0

Node IP   Config-State   Weight   Status   Track_id   Sla_id
          Active         1        OK
    Bucket List: WEB_itd_vip_1_bucket_
          Active         1        OK
    Bucket List: WEB_itd_vip_1_bucket_2
71
Show CLI: “show itd statistics”
switch# sh itd WAF statistics

Service   Device Group   VIP/mask   #Packets
WAF       WAF            /          (100.00%)

Traffic Bucket           Assigned to   Mode       Original Node   #Packets
WAF_itd_vip_1_bucket_                  Redirect                   (49.73%)
WAF_itd_vip_1_bucket_                  Redirect                   (50.27%)
72
Show CLI for IPv6: “show itd”
switch(config)# show itd

Name          Probe   LB Scheme   Status   Buckets
WEB-SERVERS   N/A     src-ip      ACTIVE   8

Device Group
IPV6_SERVER_FARM

Pool                   Interface   Status   Track_id
WEB-SERVERS_itd_pool   Eth6/13     UP       9

Node IP          Config-State   Status   Track_id   Sla_id   Bucket List
:100::100:100    Active         OK       None       None     WEB-SERVERS_itd_bucket_1, WEB-SERVERS_itd_bucket_
:200::200:200    Active         OK       None       None     WEB-SERVERS_itd_bucket_2, WEB-SERVERS_itd_bucket_
:300::300:300    Active         OK       None       None     WEB-SERVERS_itd_bucket_3, WEB-SERVERS_itd_bucket_
:500::500:500    Active         OK       None       None     WEB-SERVERS_itd_bucket_4, WEB-SERVERS_itd_bucket_8
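For reference, a sketch of how an IPv6 device group and service like the one shown above might be configured. The "node ipv6" form is an assumption inferred from the "virtual [ip | ipv6]" syntax earlier in the deck (it is not shown verbatim here), and the addresses and names are illustrative placeholders.

feature itd
!
itd device-group IPV6_SERVER_FARM
  ! assumed form: IPv6 nodes are added with "node ipv6"
  node ipv6 2001:db8:100::100:100
  node ipv6 2001:db8:200::200:200
  node ipv6 2001:db8:300::300:300
  node ipv6 2001:db8:500::500:500
!
itd WEB-SERVERS
  ingress interface e6/13
  device-group IPV6_SERVER_FARM
  load-balance method src ip
  no shut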