1
NetScaler SDX
2
Agenda
Why consolidation?
Multi-tenant solutions
SDX Overview
Hardware Internals
Performance
Service VM
Consolidation across security zones
Licensing
3
Why consolidation?
4
A LOT of Different Applications
Any enterprise will have some combination of these or similar applications, and maybe even more: applications for employees, applications for customers, and applications for partners.
5
Applications Have Different Owners
[Slide graphic: application owners spread across business functions such as Finance, Commerce, Collaboration, Manufacturing, Sales/Service, Administration, Network/Comms, Desktop Admin, and LoB specialists.] And these applications have different owners, different users, different functional requirements, different performance requirements, different security requirements, and different compliance requirements.
6
Each Application Has its Own Needs
Throughput, functionality, policies, service levels. First, obviously, different applications have different throughput requirements. Not only do they differ in bandwidth, requests/sec, packets per second, and other associated performance metrics, but they can also differ in the variability of their throughput requirements. Some may be fairly steady state. Some may have extreme seasonality. Some may be subject to month-end or quarter-end close spikes. Some may vary widely even within a day. Second, they can require very different functionality from the ADCs. Some may only require load balancing. Our commerce applications may require application firewall. Extranet applications may require advanced authentication. Those apps that came via acquisition make heavy use of URL rewrite. Some applications may be taking advantage of our new DataStream DBMS load balancing functionality. These differences in functionality can drive demand for very different policies for different apps. And in some cases, as mentioned before, we may want to let the application SMEs – since they know the apps – create and administer some of these policies. All of this leads to different applications having different service level requirements around things like performance, change management windows, unplanned downtime, and security.
7
Each Application Has its Own Lifecycle
Maintenance windows, infrastructure change frequency, application change frequency, desire for new ADC functionality. Which brings us to lifecycle in general. Apps have different uses and different owners, which means they have very different lifecycles. So they have very different requirements in terms of planned maintenance windows: when can they be offline for maintenance? They can have very different requirements for changes in their underlying infrastructure: some may go years without requiring new hardware or OS updates, while others may need new infrastructure, OS, or database updates very frequently. The apps themselves may change at different rates: some may only get updated once a year, others more than once a month. These app changes can drive needed changes to ADC configs, since in many cases the policies on the ADC are driven by app functionality. And the apps may have different demands for new ADC functionality. For example, a "big data" app may be chomping at the bit for the NetScaler DataStream functionality, while the SAP team may not be interested and, moreover, doesn't want to "risk" having any part of the infrastructure that supports their app upgraded until sometime next year.
8
Network Itself Can Drive Further Sprawl
Segmentation driven by compliance; hierarchical network topologies. But even if we're willing to accept that lowest-common-denominator approach, it's not that simple. The network itself can drive appliance sprawl. First, our networks – largely due to compliance – can be highly segmented, and generally ADC management and data plane isolation hasn't been strong enough to have a single device span segments. And hierarchical "north-south" network designs have in many cases prevented – or made impractical – using a few devices for a lot of apps, since different apps were in different parts of the network.
9
Different Apps and Networks
External DMZ, Internal DMZ, Internal Lab. So we have multiple apps with different requirements, in different network security zones.
10
SLAs after Consolidation: Two Issues/Questions
Capacity: How much performance can a single instance offer? Isolation: Can I get the needed data and management plane isolation? As we look at consolidation for ADCs (or most other things as well), we face two key questions. Capacity: if I go from separate boxes to a single box, will I still have enough aggregate capacity? Isolation: can I really prevent the "noisy neighbor" problems associated with multi-tenancy, both in terms of resource consumption and lifecycle independence?
11
Multi-tenant solutions
12
Multi-tenant ADC Partition 1 Partition 2 Partition 3 Partition 4 ADC
All tenants share a single entity. Rate limits, RBA, and ACLs partition the instance. Partitions are NOT fully isolated: no CPU/resource isolation, no version independence, no life cycle independence, no HA independence. The traditional approach for multi-tenancy is to use purpose-built hardware with software features like rate limits, ACLs, and RBA to create logical partitions or contexts. In this solution there is a single entity of the device/OS/application. It looks good, but there are lots of problems with this solution. Specifically: There is no CPU and resource isolation – one partition can greatly impact the performance of other partitions. There is no version independence – all the tenants are forced to use the same version of software. There is no life cycle independence – if the software has a bug impacting one of the tenants, the other tenants are impacted too. There is no High Availability (HA) independence – we cannot fail over a single partition. If a failover has to happen, all partitions have to fail over.
13
Multi-tenancy - Virtual ADCs
Hypervisors: each tenant gets a virtual ADC, with brick-wall partitioning between tenants. Good isolation, but performance doesn't scale. Hypervisors have become very common, and public cloud providers use hypervisors like Xen to provide multi-tenant solutions. The hypervisors are now enterprise class and provide stable environments for multi-tenancy. In a hypervisor-based solution, the hypervisor is installed on generic or specialized hardware, and an ADC runs as a virtual machine (VM) for each tenant. The hypervisor provides brick-wall-like partitioning across tenants. In this solution, VMs get resource isolation and version and life cycle independence. NetScaler VPX is a solution that can be deployed as a VM. One problem with the hypervisor-based solution is that network performance doesn't scale: a device capable of processing 50 Gbps of traffic natively will not be able to process 50 Gbps with virtualization.
14
Packet Flow: Hypervisor vSwitch and Virtual ADC
Receive path: the NIC receives a packet, the vSwitch forwards the packet to the destination ADC, and the ADC processes the packet. Transmit path: the ADC transmits a packet, the vSwitch receives the packet, and the vSwitch transmits the packet on the NIC. To understand the issue, let's take a look at the packet flow. In the hypervisor-based solution, only the hypervisor has direct access to the hardware. When a NIC receives a packet, the hypervisor receives the packet and switches it to the right VM – by copying the packet – and the guest VM then receives the packet. Similarly, when a packet is transmitted by the VM, the hypervisor receives it and forwards the packet on the NIC. This switching in the hypervisor becomes a bottleneck under heavy network loads, as hypervisors are not designed for specialized network processing; they are built for general-purpose computing.
15
Two Options Compared: Appliance Sprawl vs. Lowest Common Denominator
Device per App (Appliance Sprawl) vs. VIP per App (Lowest Common Denominator). Resource isolation: High vs. Low. Lifecycle isolation and delegated admin safety: strong for device per app, weak for VIP per app. Efficiency and CAPEX/OPEX: poor for device per app, good for VIP per app. Comparing these approaches is fairly straightforward. We can look at the extent each provides for: Resource isolation: our ability to prevent one app from overrunning another app. Lifecycle isolation: can we manage the configs for one app independently of the configs for another app? Delegated admin safety: could we safely let an application SME manage parts of his app's ADC config without putting our network at risk? Efficiency: how well are we preserving space, power, and cooling? CAPEX/OPEX: how much does each approach cost? Device per app is great for the first three, since each app gets its own ADC, but it's pretty expensive and not very efficient. VIP per app is great from an efficiency and CAPEX/OPEX perspective, but is really limited from an isolation perspective. So the net effect is that we're faced with a choice between appliance sprawl, or a "lowest common denominator" solution that results from the various compromises associated with putting apps with different requirements on the same ADC device.
16
NetScaler SDX: NetScaler Hardware, XenServer, ServiceVM, NetScaler VPXs
NetScaler hardware: Intel processors and SR-IOV capable NICs. XenServer: CPU virtualization and I/O virtualization. ServiceVM: management console. NetScaler VPXs: tenant instances. For NetScaler SDX, we use the same hardware that NetScaler MPX uses for high-performance networking. We use XenServer for virtualization. The hardware and the XenServer hypervisor support SR-IOV, so the hypervisor is no longer a performance bottleneck in the SDX. We also have a management service running on the SDX – the ServiceVM – which provides services like creation, modification, and deletion of VPXs, similar to the services XenCenter provides for XenServer hosts. You can automate many of the management tasks by using the NITRO API provided by the ServiceVM. Multiple NetScaler VPXs can be provisioned on the SDX to provide a multi-tenant solution. NetScaler VPX and NetScaler MPX use the same software, so NetScaler VPX is as rock solid as NetScaler MPX.
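Because the NITRO interface is a REST API, routine ServiceVM tasks can be scripted. The sketch below is a minimal illustration, assuming the ServiceVM answers NITRO requests under /nitro/v1/config/ and exposes a resource listing the provisioned VPX instances; the resource name ("ns"), header names, field names, and credentials are assumptions for illustration, not confirmed API details.

```python
# Hypothetical sketch: list VPX instances via the ServiceVM's NITRO REST API.
# The endpoint path, resource name ("ns"), headers, and credentials are assumptions.
import requests

SVM = "https://192.0.2.10"          # ServiceVM management address (example)
AUTH = {"X-NITRO-USER": "nsroot",   # illustrative credentials only
        "X-NITRO-PASS": "nsroot"}

def list_vpx_instances():
    """Fetch the VPX instance inventory from the ServiceVM (assumed resource name)."""
    r = requests.get(f"{SVM}/nitro/v1/config/ns", headers=AUTH, verify=False)
    r.raise_for_status()
    for vpx in r.json().get("ns", []):
        print(vpx.get("name"), vpx.get("ip_address"), vpx.get("state"))

if __name__ == "__main__":
    list_vpx_instances()
```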
17
SDX Overview
18
New NetScaler SDX Multi-tenancy Approach
Complete instance per tenant Memory, CPU isolation Separate entity spaces Version independence Lifecycle independence Completely isolated networks Single license per appliance provides system throughput limits and max number of instances Meets performance and scalability requirements Up to 40 Tenant Instances
19
Multiple Instances In A Single Platform
Complete isolation Complete independence Segmentation w/in instances
20
Traditional Multi-tenancy Approach
Single administrator controls much of configuration All tenants share a single resource Traffic domains for network segmentation Rate limiting for resource isolation RBA/roles for management isolation Shared entity space Partitions NOT fully isolated No CPU, memory isolation No version independence No maintenance independence No per-tenant High Availability Common Global Instance Tenant Partitions Market forces driving sophisticated multi-tenancy are: New network architectures – consolidated and flattened architectures drive need for more powerful ADCs with multiple instances of VPX New data center architectures – consolidation of data center and supporting multiple clients in less rack space IT shared services models – relationship between IT organization and the business it supports, relying on shared infrastructure while maintaining SLAs for each business
21
IO Virtualization Switch PCI SR-IOV, Intel VT-d
Physical Function (PF) and Virtual Function (VF); assign a VF to a VM; IOMMU; efficient sharing of resources; NICs support SR-IOV. SR-IOV is a PCI standard that provides I/O virtualization. With I/O virtualization, a physical device/function like a NIC can be carved into virtual devices/functions. The virtual functions can be assigned to virtual machines, and a virtual machine then has direct access to the hardware via its virtual function. The IOMMU translates the guest's physical addresses to host physical addresses. With I/O virtualization, VMs can efficiently share I/O devices. Recent NICs and controllers, such as those from Intel, support SR-IOV.
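The SDX handles all of this internally, but as a generic illustration of the SR-IOV mechanism on an ordinary Linux host with an SR-IOV capable NIC (this is not the SDX internals; the interface name, VF count, MAC, and VLAN values are placeholders):

```python
# Generic illustration of SR-IOV setup on a Linux host (not the SDX internals).
# Interface name, VF count, MAC, and VLAN values below are placeholders; needs root.
import subprocess

PF = "eth0"          # physical function (an SR-IOV capable NIC port)
NUM_VFS = 4

# 1. Ask the NIC driver to create virtual functions.
with open(f"/sys/class/net/{PF}/device/sriov_numvfs", "w") as f:
    f.write(str(NUM_VFS))

# 2. Give VF 0 a MAC address and a VLAN filter; the NIC will only queue
#    frames to this VF if both the MAC and the VLAN match.
subprocess.run(["ip", "link", "set", PF, "vf", "0",
                "mac", "02:00:00:00:00:05", "vlan", "5"], check=True)

# 3. The VF (a PCI virtual function) can now be passed through to a VM, which
#    accesses it directly; the IOMMU remaps the guest's DMA addresses.
```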
22
IO Virtualization - NIC
VF RX and TX queues, MAC addresses, VLAN filters. RX: MAC filtering (phase 1), then VLAN filtering (phase 2); the packet is queued only if both pass. TX: the NIC fetches the packet directly from the TX queue and transmits it. No hypervisor involvement. With I/O virtualization, each VF gets its own hardware RX and TX queues and has direct access to the hardware. MAC and VLAN filters are associated with each VF. When the NIC receives a packet, two levels of filtering are applied: in the first phase, MAC filtering finds the right VF based on the destination MAC address; VLAN filtering is then applied to the packet. A packet is queued to a VF only if both the MAC and VLAN filters pass. When a VF transmits a packet, it queues the packet in its TX queue and the hardware fetches the packet for actual transmission. There is no hypervisor involvement in the data path. Packet switching is done at the hardware level, resulting in higher network performance, and the hardware's MAC and VLAN filtering capabilities isolate the traffic across VMs. Using I/O virtualization technologies, we get the needed isolation without sacrificing performance.
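Conceptually, the per-VF receive decision described above can be summarized as follows. This is only a simplified software model of the behavior, not NIC firmware; the frame fields and MAC values are illustrative.

```python
# Simplified model of the NIC's per-VF receive filtering described above.
# Real filtering happens in NIC hardware; this is only a conceptual sketch.
from dataclasses import dataclass, field

@dataclass
class VirtualFunction:
    macs: set                                   # MAC filters assigned to this VF
    vlans: set                                  # VLAN filters assigned to this VF
    rx_queue: list = field(default_factory=list)

def receive(frame, vfs):
    """Queue a frame to a VF only if both its MAC and VLAN filters match."""
    for vf in vfs:
        mac_ok = frame["dst_mac"] in vf.macs    # phase 1: MAC filtering
        vlan_ok = frame["vlan"] in vf.vlans     # phase 2: VLAN filtering
        if mac_ok and vlan_ok:
            vf.rx_queue.append(frame)           # delivered directly, no hypervisor copy
            return vf
    return None                                 # no match: not delivered to any VF

# Example: a frame on VLAN 5 is queued only to the VF that owns that MAC and VLAN.
vf5 = VirtualFunction(macs={"02:00:00:00:00:05"}, vlans={5})
vf6 = VirtualFunction(macs={"02:00:00:00:00:06"}, vlans={6})
receive({"dst_mac": "02:00:00:00:00:05", "vlan": 5}, [vf5, vf6])
```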
23
SDX: Multi-tenant NetScaler Appliance
[Slide graphic: ServiceVM plus NetScaler instances 1–3 on a virtualization layer, with management ports 0/1–0/2 and data ports 1/1–1/8 and 10/1–10/4.] ServiceVM: management plane for the entire device. Multiple management networks. Instances are separate VMs. The management plane uses the vSwitch; the data plane uses SR-IOV. The best place to start is with the base architecture. NOTES TO SPEAKER: Work the build and walk the audience through the base architecture. Things to emphasize: there is an underlying virtualization layer that provides the core abstraction (your call as to whether to call out that it is XenServer). Point out that the device uses the vSwitch for the management plane network, and that the management plane network can actually be multiple networks – important for compliance/consolidation across security zones. Also point out that the data plane DOES NOT go through the vSwitch; this is important for scalability/performance, multi-tenancy, and isolation.
24
Dedicated/Shared Resources
Resource isolation. Dedicated resources: memory and SSL. Dedicated or shared resources: CPU and network. On NetScaler SDX, instances get dedicated and shared resources. Memory is dedicated to an instance. Similarly, the SSL devices assigned to a VPX instance are dedicated; a VPX can be assigned zero or more SSL devices. CPU resources can be dedicated or shared depending on requirements: each instance can get up to 5 dedicated cores (10 hyper-threads). Dedicated CPU allocation is useful for instances running production traffic, while shared CPU allocation can be used for instances created for testing or training. Allocation of the network devices is also flexible on NetScaler SDX: the devices can be shared or dedicated based on security/compliance requirements. Finally, throughput and packets-per-second rate limits can be imposed on a VPX instance to control its network usage.
25
Fine grained CPU allocation
[Slide graphic: CPU 1 (cores 1–12) and CPU 2 (cores 13–24) allocated among VPX 1, VPX 2, and VPX 3/4.] NetScaler SDX allows fine-grained control over the allocation of CPU resources to an instance. At present the SDX has two six-core processors; enabling hyper-threading results in 12 logical cores per CPU and a total of 24 logical cores per system. In the picture above, one block of cores (cores 3–8) is dedicated to VPX 1, another block is dedicated to VPX 2, and the remaining cores are shared by VPX 3 and VPX 4.
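As a simple illustration of this allocation model, the sketch below builds a core map and checks that no dedicated core is double-booked. Only the VPX 1 range comes from the slide; the assignments for VPX 2, 3, and 4 are hypothetical examples.

```python
# Illustrative core-allocation map for a 24-logical-core SDX (2 x 6 cores with
# hyper-threading). Only the VPX 1 range (cores 3-8) is taken from the slide;
# the VPX 2/3/4 assignments below are hypothetical examples.
DEDICATED = {
    "VPX1": list(range(3, 9)),        # cores 3-8, dedicated (as stated above)
    "VPX2": list(range(13, 19)),      # hypothetical dedicated block on CPU 2
}
SHARED = {
    "VPX3+VPX4": list(range(19, 25)), # hypothetical shared pool
}

def check_no_overlap(allocations):
    """Dedicated cores must not be assigned to more than one instance."""
    seen = set()
    for name, cores in allocations.items():
        overlap = seen.intersection(cores)
        assert not overlap, f"{name} overlaps cores {overlap}"
        seen.update(cores)

check_no_overlap(DEDICATED)
print("dedicated cores in use:", sum(len(c) for c in DEDICATED.values()))
```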
26
Resource Isolation RAM is a hard allocation – no sharing
SSL offload is a hard allocation – no sharing Data plane CPU can be a hard allocation Which brings us to isolation. Let’s start with resource isolation. Each instance gets its own dedicated RAM allocation. This memory is not shared. This means you don’t need to worry about setting individual connection table size limits to prevent “noisy neighbor” issues. Each instance gets its own memory, and is free to use the memory as it sees fit. Each instance can also be allocated one or more of the Cavium SSL chips (16 total on Corinth, 36 on Galata). Again these chips aren’t shared. <Note: may want to refer back to SSL example on slide 7 to reinforce this. Also, F5 Viprion vCMP DOES NOT have hard isolation of SSL, so it can’t protect against the issue discussed on slide 7>. The data plane CPU for each instance can also be a hard allocation. However, at a certain instance count (11 or more) some of the instances will need to share cores.
27
Lifecycle Management Isolation
Version management is done at the instance level. HA is done at the instance level. Next, let's talk about lifecycle isolation. First, each instance has its own NetScaler OS kernel, and these kernels can be upgraded independently. So, for example, when the next version of NetScaler OS becomes available, some of the instances can be upgraded while others can be left at NetScaler 9.3. This gives us the flexibility to consolidate and still meet the individual requirements of different apps. Second, HA is also done at the instance level.
28
Instance High Availability
In an HA pair, we can fail over an individual instance on device A to device B, without having to flop the entire device and every instance on the device. Embedded within this is the ability to have active instances on both devices. On SDX we have the ability to upgrade an instance without upgrading the entire device, and the ability to fail an instance over without failing over the entire device.
29
Network Isolation: Each instance is its own kernel, so
it gets its own connection tables, its own routing tables, and its own IP stack. Strong isolation of data traffic on the data plane; strong isolation of management traffic on the management plane. Which brings us to network isolation. As discussed earlier, each instance gets its own kernel, so it has its own IP stack, its own routing tables, VLANs (more on that later), connection tables, and so on. For the data plane, our use of SR-IOV provides very strong isolation here, and we have a lot of flexibility for how we can isolate on the management plane as well.
30
Data and Management Plane Isolation Summary
Ability to have multiple management networks Separate network for ServiceVM and NSIPs Separate networks for different NSIPs Very strong data plane isolation options Dedicate interfaces to instances Share interfaces with VLAN filtering NOTE TO SPEAKERS: -bullets are fairly self-explanatory. Here you may want to explain a bit about what we mean by “NSIP” -Also, this deck simplifies things by not getting into the subtleties of VLAN filtering at the physical interface (and thus the limit of either 23 or 63 VLANs per physical interface) vs. disabling VLAN filtering at the interface and thus being able to pass all 4096 VLANs to the instances. Rather, it just discusses using either dedicated interfaces or sharing interfaces using VLAN filtering.
31
NetScaler SDX: Platform for Evolution
[Slide graphic: timeline of NetScaler evolution along axes of throughput (20, 35, 50 Gbps), functionality, and density (5, 20, 40 instances) – from the early NetScaler editions (circa 2000), through the NS-S/NS-E/NS-P editions (2005/06), Pay-and-Grow (2009), SDX, and later SDX+ with vendor integration and AWS support (2012/13), with more to come.]
32
System At-A-Glance SDX-17500 SDX-19500 SDX-21500
Per system: Service VM OS, up to 16 instances, 16 SSL cores; VPX uses software RSS only. SDX-17500: 20 Gbps throughput. SDX-19500: 35 Gbps throughput. SDX-21500: 50 Gbps throughput. Each VPX requires 2 GB of RAM; 2 GB x 20 VMs + Service VM memory + XenServer memory = the maximum amount of memory on the SDX.
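Using the 2 GB-per-VPX and 20-VM figures quoted in the memory note, and the 48 GB of system RAM mentioned on the hardware slide later, the budget works out roughly as below; the Service VM and XenServer overheads are assumed values for illustration, not specified figures.

```python
# Rough memory budget for a fully loaded system, using the 2 GB-per-VPX and
# 20-VM figures quoted above and the 48 GB system RAM mentioned later.
# The Service VM and XenServer overheads are assumed values for illustration.
PER_VPX_GB = 2
MAX_VPX = 20
SERVICE_VM_GB = 4      # assumption
XENSERVER_GB = 4       # assumption

total = PER_VPX_GB * MAX_VPX + SERVICE_VM_GB + XENSERVER_GB
print(f"VPX instances: {PER_VPX_GB * MAX_VPX} GB; total budget: {total} GB (system has 48 GB)")
```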
33
System At-A-Glance SDX-11500 SDX-13500 SDX-14500 SDX-16500 SDX-18500
Per system: Service VM OS, 16 instances; VPX uses software RSS only. SDX-11500: 8 Gbps throughput. SDX-13500: 12 Gbps throughput. SDX-14500: 16 Gbps throughput. SDX-16500: 20 Gbps throughput. SDX-18500: 30 Gbps throughput.
34
System At-A-Glance SDX-17550 SDX-19550 SDX-20550 SDX-21550 Per System
Service VM OS; 40 instances per system; VPX uses software RSS only. SDX-17550: 20 Gbps throughput. SDX-19550: 30 Gbps throughput. SDX-20550: 40 Gbps throughput. SDX-21550: 50 Gbps throughput.
35
Hardware Specifications
36
SDX 17500/19500/21500 – Front (Fiber): 8 x 10GE SFP+ interfaces (fiber, short and long reach) labeled 10/1 through 10/8, and 2 x 10/100/1000 Mb management interfaces labeled 0/1 and 0/2.
37
LCD Keypad. When you first install the appliance, you can configure the initial settings by using the LCD keypad on the front panel, which interacts with the LCD display module, also on the front panel. The keypad removes the dependency on the console port and is quick and easily accessible. Key functions: the up and down keys increment and decrement the digit under the cursor; the left and right keys move the cursor one digit to the left or right; the ENTER key is used to save, exit, or cancel. You are prompted to enter the subnet mask, NetScaler IP address (NSIP), and default gateway, in that order (IPv4 only), and the keypad verifies that the default gateway address is in the NSIP subnet. The subnet mask is associated with both the NSIP and the default gateway IP address. The NSIP is the IPv4 address of the NetScaler appliance. The default gateway is the IPv4 address of the router that handles external IP traffic the NetScaler cannot otherwise route; the NSIP and the default gateway should be on the same subnet. Note: the configuration file (ns.conf) must contain the following command with its default values: set ns config -IPAddress <IP address> -netmask <netmask>
38
SDX 17500/19500/21500 – Back: dual power supplies, USB port (non-functional)
500 GB hard drive; 48 GB memory; 160 GB SSD (replacement for compact flash). The USB port is reserved for a future release. Dual AC power supplies; the second serves as a backup. The 160 GB SSD is used for storing and booting the OS and storing the configuration (de facto standard on other network products). It is mounted as /flash and is read-only. Because it is solid state, there is less chance of it failing than a hard disk with moving parts, so it is more reliable than disks. Back in the days of 7.0 and before, the hard disk was not necessary for the packet engine to run: we would not log anything, but the appliance would still function in the event of a hard disk failure. The reason the NetScaler still functioned is that the Master Boot Record and kernel all lived on the CompactFlash. The hard drive is used for logs, core dumps, etc.
39
Hardware Internals
40
NetScaler VPX I/O Limitation on XenServer
Domain0, bridge, drivers; NetScaler VPX with VF driver and PV driver. NetScaler traffic goes through dom0: XenServer dom0 does the actual RX/TX over the physical NIC, so XenServer becomes a bottleneck (<3 Gbps*). * With Release 9.2.nc
41
NetScaler SDX with SR-IOV
(Single Root I/O Virtualization.) XenServer Domain0 drivers; NetScaler VPX with VF driver. Virtual machines can communicate directly with virtual NICs (bypassing dom0), so SDX can achieve near-native performance. VPX VMs tend to be network I/O bound rather than memory or CPU bound, making them ideal candidates to take advantage of SR-IOV. SR-IOV limitations do not allow XenServer HA (virtual machine failover) or XenMotion; fortunately, these are not significant limitations in VPX deployments, which implement their own HA mechanisms.
42
NetScaler SDX with SR-IOV
(Single Root I/O Virtualization.) XenServer Domain0 drivers; NetScaler VPX with VF driver. Hardware I/O virtualization for networking requires SR-IOV-enabled VF Intel NICs: MPX 17500/19500/21500, MPX (Corinth), MPX (Galata). Up to 64 virtual NICs can be assigned to virtual machines. Today, the performance of SDX is equivalent to the performance of MPX 17500: 64 VFs, 50 Gbps per SDX. Today, the Intel NICs don't support hardware RSS, so the maximum is 16 Gbps per instance, and software RSS does not support SR-IOV, so adding more CPUs to a VPX instance will not improve performance. Each VPX instance has a dedicated VF, so its performance is not impacted by other VPX instances. Up to 16 Gbps per VM.
43
SR-IOV Advantages SR-IOV is a PCI device virtualization technology that allows a single PCI device to appear as multiple PCI devices on the physical PCI bus: the real physical device is called physical function (PF) while the others are called virtual functions (VF). The XenServer hypervisor can directly assign one or more of these VFs to a virtual machine using Intel VT-D technology The guest can use the VF as any other directly assigned PCI device. Assigning one or more VFs to a virtual machine allows the virtual machine to directly exploit the hardware without any mediation by the hypervisor. This means better performance and scalability because it has very little or no impact on dom0.
44
SR-IOV Disadvantages: VFs have no relationship with VIFs and bridges in dom0, so VFs must be configured separately and independently by the administrator. A VM loses all assigned VFs after being migrated to a different host, so using XenMotion or High Availability (HA) failover requires manual reinstatement of VFs on the new host. These limitations on XenServer virtual machine failover and XenMotion are not significant in a NetScaler VPX deployment, which can implement its own HA and load balancing mechanisms.
45
Intel 82599 NIC Virtual Function Driver Capabilities
Supported: link up/down status for HA environments; tagged VLANs; IPv6 on VPX instances; manual link aggregation. Not supported: speed/duplex/flow control; LACP; IPv6 on the Service VM; hardware Receive Side Scaling (RSS). All instances that share a physical interface must be configured similarly. Today, the Intel NICs don't support hardware RSS, so the maximum is 16 Gbps per instance, and software RSS does not support SR-IOV, so adding more CPUs to a VPX instance will not improve performance.
46
VF Driver Provides Network Isolation w/VLANs
Full instance isolation: separate routing domain; independent routing and IP stack; independent connection table, ACLs, etc. Per-instance network isolation: traffic is sent only to the intended instance, with isolation enforced at the NIC. Each VPX instance has a dedicated VF, so its performance is not impacted by other VPX instances.
47
Supported Configurations
48
SDX High Availability: active systems can exist on both devices; instance-level HA; stateful connection failover; HA per instance within an appliance; HA per instance between appliances. Limitations: identical VLAN IDs are required between HA pairs; the –trunk option is required to tag the heartbeat packets with the VLAN ID; VMAC is not supported; Active/Active capability is targeted for a future release. To create a high availability pair between two NetScaler appliances in different subnets, you must enable INC mode. Trunk means the port is in trunking mode with no native VLAN support, i.e., all VLANs are tagged, including the native VLAN; this option was made available for compatibility with some Force10 switches. "Trunk" on a NetScaler equals "tag native VLAN" on a Cisco. If you enable the trunk option on the NetScaler appliance and the switch does not tag and allow tagged frames on the native VLAN, then high availability communication – heartbeat packet exchange, configuration synchronization, and command propagation between the NetScaler appliances – is blocked on the interface connected to the switch.
49
Link Aggregation Configurations
Link aggregation will work across like speed ports (10GE) Link aggregation only works with ports of same type (fiber with fiber) Supports up to 4 channels per system Supports up to 8 ports per channel No LACP
50
Service VM
51
Service VM Overview: the Service VM is a pre-provisioned FreeBSD 64-bit VM
Service VM manages the whole appliance (XenServer is not exposed) Management Interfaces GUI (HTTP/S) & API (similar to NITRO in NetScaler)
52
Device Management Show System Information
Number of CPU cores Available/free memory Version details Scheduling of data backup and pruning Backup of database and configuration files (last 5 versions) Pruning of database (to keep data size in control) VPX Instance inventory (with system details) Device level stats (CPU, Memory, Stats)
53
Device Management (con’t)
Port administration changes the interface speed and auto negotiation settings Assign management IP Address to XenServer (only SSH is allowed) for service virtual machine failure conditions Resources to upload files from a local system to the service virtual machine Event management Task management Auditing Tech Support for the service virtual machine and XenServer
54
Instance Management Start, stop, reboot, remove
Upgrade (single or multiple instances); running/saved config; instance resource utilization; audit messages; Service VM admin user management (without RBA). Port administration: change interface speed and auto-negotiation settings. Save config on a VPX; add a MIP/SNIP on a VPX; modify/remove a VPX; start/stop/reboot operations on a VPX; CPU and memory utilization and throughput (whole system and per VPX); install SSL certificates on multiple VPX instances; upgrade multiple VPX instances; upgrade the Service VM (the kernel upgrade is pending); reboot the Nicaea appliance (including XenServer). When the Service VM starts, it needs to set auto_poweron on itself through XenServer, so that when XenServer reboots it can automatically start the Service VM; the Service VM on XenServer should carry the description "Service VM" so that it can be identified and auto_poweron can be set for it. Event management; task management.
55
Instance Provisioning
Instance loaded from XVA template repository: Apply Memory Settings Assign CPU cores Apply SRIOV Virtual Functions Assign SSL cores Apply port/interface configuration Install SSL Certificates Assign NSIP, MIP, SNIP Assign VLAN Tagging Add pre-canned root user on VPX Restrictions on this root user through command policy Set Throughput and PPS settings (Rate Limiting)
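A provisioning request of this kind could be scripted against the ServiceVM's NITRO API. The sketch below simply mirrors the steps listed above; the resource name ("ns"), field names, and all values are illustrative assumptions, not confirmed API details.

```python
# Hypothetical sketch: provision a VPX instance through the ServiceVM's NITRO API.
# Resource name ("ns"), field names, and all values are illustrative assumptions.
import requests

SVM = "https://192.0.2.10"
AUTH = {"X-NITRO-USER": "nsroot", "X-NITRO-PASS": "nsroot"}

vpx_spec = {
    "name": "vpx-app1",
    "ip_address": "10.0.1.20",     # NSIP for the new instance
    "netmask": "255.255.255.0",
    "gateway": "10.0.1.1",
    "memory_total": 2048,          # MB, dedicated to the instance
    "number_of_cores": 2,          # dedicated CPU cores
    "number_of_ssl_cores": 1,      # dedicated SSL chips
    "throughput": 1000,            # Mbps rate limit
    "vlan_id_0_1": 5,              # VLAN tagging on the assigned interface
}

resp = requests.post(f"{SVM}/nitro/v1/config/ns", json={"ns": vpx_spec},
                     headers=AUTH, verify=False)
resp.raise_for_status()
print("provisioning request accepted:", resp.status_code)
```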
56
Service VM Internals: the Service VM sends API calls to the VMs for management tasks. There is no CLI for the Service VM. Memory usage is reported per VM and therefore per dedicated core; the other monitoring screens show system-based, aggregate usage. The Service VM can reboot the SDX appliance (including XenServer). When the Service VM starts, it must set auto_poweron on itself through XenServer so that when XenServer reboots, it can automatically start the Service VM. The Service VM on XenServer should carry the "Service VM" description so it can be identified and auto_poweron can be set on it.
57
Consolidation across security zones
58
Simple Consolidation Consolidation within a single zone
Device admin is also admin for all instances. For the first example, let's just look at using an SDX to consolidate and provide dedicated instances for a series of apps that all reside in the same security zone. Let's also assume that the admin for the device is also the admin for each individual instance.
59
NetScaler SDX-11500 Interface Topology
Management interfaces (0/1, 0/2), 1G data interfaces (1/1–1/8), and 10G data interfaces (10/1–10/4). This shows the interface topology for the SDX-11500, our most popular SDX device.
60
Simplest Deployment ServiceVM
(ServiceVM and NSIPs on the same network.) Instances 1–5, with no sharing of data interfaces. Let's say we're supporting five different instances. First, since all the instances are in the same security zone, and since it's the same admin for everything, there isn't really any reason not to have the ServiceVM and the NSIP/management interface for all the instances on the same network; therefore, a single management network on the device is fine. For the data plane, one approach is to just give each instance its own dedicated physical interface or interfaces. Remember, since the data plane traffic uses SR-IOV, this traffic doesn't go through a central virtual switch, so the isolation is very strong. And in this case, each instance can have any or all of the 4096 VLANs available (subject, of course, to how the rest of the network is configured). Of course, the data plane networks can be completely different networks.
61
Simplest Deployment ServiceVM
(ServiceVM and NSIPs on the same network.) Deployments where compliance is not a concern and all instances are in the same security zone. Instance density is limited to the number of physical interfaces. Data plane isolation is achieved via no sharing of physical interfaces, with 4096 VLANs per interface and instance. NOTE TO SPEAKER: Simply recap. Maybe point out this is the most common deployment. The drawback is, of course, that density is limited to the number of interfaces on the device.
62
Data Plane Isolation with Shared Interfaces
(ServiceVM and NSIPs on the same network.) Instances 1–6, with VLAN filtering enabled on the 10/4 interface (VLAN5 and VLAN6 shared on that port). SR-IOV provides the capability to safely share an interface across instances. We talked earlier about SR-IOV providing better performance; that's actually a side effect of its intended purpose, which is to virtualize a single physical interface into multiple virtual interfaces, and to do it in a safe manner. First, unlike straight PCI passthrough, SR-IOV is safer: you don't need to worry about a bug in one of the guests bringing down every guest on the interface. Second, it provides the ability to isolate traffic. Specifically, by providing VLAN filtering at the interface level, we can ensure that – for example – traffic from VLAN6 is only sent to instance 6 and traffic from VLAN5 is only sent to instance 5. You can test and validate this by running a broadcast storm against instance 6, and it won't impact instance 5 at all.
63
Data Plane Isolation with Shared Interfaces
(ServiceVM and NSIPs on the same network; VLAN filtering enabled on the 10/4 interface.) Use when you need more instances than physical ports, or in scenarios where conserving switch ports is important. Instance density is limited only by the platform maximum. The SDX will NOT forward VLAN5 traffic to instance 6. VLAN filtering can be enabled or disabled interface by interface. NOTES TO SPEAKER: Simply recap. The key point is that this provides higher density and also conserves switch ports. Note, if LACP comes up, you want to state that link aggregation is supported; it just needs to be manually configured at the instance and at the switch. The issue with LACP comes down to the fact that there isn't a single network point on the data plane to act as the device side for LACP, so if there are two instances sharing two bonded interfaces, there is contention/conflict around who "owns/controls" LACP. It is, however, something we are looking at.
64
Simple Consolidation with Delegated Administration
Consolidation within a single zone Different admin for applications Next example looks at a “delegated admin” scenario. Specifically, where the device admin wants to let another admin come in and manage not the device, but just one of the individual instances.
65
From One Management Network
(ServiceVM and NSIPs on the same network; instances 1–6, VLAN filtering enabled on the 10/4 interface.) The device admin doesn't want instance admins on the same network as the ServiceVM. So if we go back to our original topology, a couple of things to consider. First, each instance supports all the RBA of any other NetScaler: the device admin can create an RBA profile within an instance for the delegated admin, walling off things he doesn't want that admin to change (e.g., VLAN settings, ability to go to the shell, etc.). However, in this topology, the device admin would need to grant the delegated admin access to the network that the ServiceVM – which controls the entire device – is on. In some cases that might be fine, but in other cases it might not be OK.
66
To Separate Management Networks
[Slide graphic: ServiceVM and instances 1–6 with two separate management networks; VLAN5/VLAN6 on the shared data interface.] So, we provide the capability to create another management network. This shows it on another interface, but it could be on 0/1 as well. Also, there is the ability to keep the traffic on the device, or to force communication between the ServiceVM and the instances off the device and then back on; we see this when it might be important to send this traffic through an intermediary like a firewall for audit/compliance purposes.
67
To Separate Management Networks
Use when the device admin doesn't want instance admins on the ServiceVM network. Deployments where all instances are in the same security zone. Data plane isolation is achieved via either port(s) per instance or VLAN filtering; when ports are dedicated, each instance gets up to 4096 VLANs. NOTE TO SPEAKER: Again, simply recap. Stress that this isn't required for delegated admin, but in many cases is desirable.
68
Consolidation Across Security Zones
Consolidate across security zones Each security zone has its own management network Device admin wants to let others administer individual instances Last case extends this to consolidating across security zones. Depending upon the compliance stance, this may require a separate management network for each security zone. That’s what we’ll look at in this example. Note, this isn’t always the case. We have many customers – some in highly regulated and compliance conscious industries – that do have a single management network that spans all their security zones.
69
Separate Security Zones
[Slide graphic: ServiceVM plus NetScaler VPX 1–5 spread across separate security zones (internal and DMZ), with VLAN4/VLAN5 on shared data interfaces.] Data and management plane isolation supports network segmentation use cases. Support for multiple management networks: separate the ServiceVM from the NSIPs, and separate NSIPs from each other. Very strong data plane isolation options: dedicate interfaces to instances, share interfaces with VLAN filtering, or share interfaces without VLAN filtering. Multiple management networks support hierarchical networking. Flexible data ports: dedicate an interface to a zone or share interfaces within a zone, with traffic isolation at the hardware level via MAC and VLAN filtering.
70
Separate Security Zones
Scenarios where compliance is an issue, specifically when the compliance stance requires separate management networks per security zone. Data plane isolation is achieved via either port(s) per instance or VLAN filtering; when ports are dedicated, each instance gets up to 4096 VLANs.
71
Licensing
72
NetScaler SDX Licensing
Platform license – entitles the base SDX appliance; by default, 5 instances are allowed on certain platforms. 5-Instance Add-On Pack license (instance pay-and-grow) – enables adding VPX instances beyond the default 5. Platform Upgrade license (platform pay-and-grow) – upgrade to a higher throughput capacity on the same hardware platform. Platform Conversion license – change an MPX to an SDX (not applicable for FIPS, 9500, 7500, 5500). SDX does not take the traditional, partition-based approach to multi-tenancy. Independent VPX instances enable lifecycle independence without the limitations of partitioning – no monolithic, appliance-wide dependencies. Each instance has its own isolated environment: operating system kernel; CPU and memory address space; services (routing stack, packet engines, etc.). This provides the foundation for the true resource and lifecycle isolation necessary for consolidating. NetScaler SDX = NetScaler hardware + XenServer + ServiceVM + NetScaler VPXs with direct access to hardware. XenServer provides CPU and I/O virtualization; the ServiceVM provides management services like creation, modification, and deletion of VPXs; the NetScaler VPXs have direct access to the virtualized hardware NIC. So, in an HA pair, we can fail over an individual instance on device A to device B without having to flop the entire device and every instance on the device. Embedded within this is the ability to have active instances on both devices. We thus pass the litmus test we discussed earlier: the ability to upgrade an instance without upgrading the entire device, and the ability to fail an instance over without failing over the entire device.
73
NetScaler SDX Licensing Example
Purchased system = SDX 11500. Apply the platform license: up to 5 VPX instances, maximum system throughput 8 Gbps. Add 3 x 5-pack instance licenses: up to 20 VPX instances – maximum system throughput still 8 Gbps. Apply an SDX-to-SDX platform upgrade license: maximum system throughput increases to 36 Gbps.
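The instance count in this example is simple arithmetic; throughput is governed separately by the platform or upgrade license. A trivial sketch of the calculation:

```python
# Instance-count arithmetic for the licensing example above.
BASE_INSTANCES = 5          # included with the platform license
ADDON_PACKS = 3             # 5-Instance Add-On Packs purchased
INSTANCES_PER_PACK = 5

max_instances = BASE_INSTANCES + ADDON_PACKS * INSTANCES_PER_PACK
print(max_instances)        # 20 -- throughput stays at the platform cap (8 Gbps)
                            # until the platform upgrade license is applied
```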