How to use this presentation
Length: 60 minutes; can be customized based on presenter preference.
Key message: Manage resources holistically, with greater flexibility and increased resilience, by delivering networking resources as a shared, elastic resource pool. With an infrastructure controlled by policy-driven software, the software-defined datacenter (SDDC), and specifically a software-defined network, reduces costs and enables policy-driven efficiency in your organization.
Target audience: Developers, IT Pros, ITDMs, CxOs.
Demos: None.
Preparation resources: MSDN, TechNet, Microsoft Virtual Academy.
Creating a dynamic datacenter with Windows Server 2016 software-defined networking
Datacenter Network
Diagram: spine switches/routers, edge routers, fixed-function physical appliances, and compute/storage with top-of-rack (ToR) switches.
Datacenter networks are complex and have been built out over many years. Traditionally, deploying an app requires making changes across the entire physical network for switching, routing, load balancing, firewall, and edge services. This is error prone and takes time, time that CXOs just don't have if they are to remain competitive.
Challenges customers face
Agility: “I need to onboard workloads with complex policies across my own datacenter and/or the public cloud in days, not weeks, to remain competitive.”
Security: “I must stop a compromised node from attacking other nodes on my network.”
Costs: “I need to reduce the number of operator interventions and efficiently meet network growth demands. Current practices just won’t scale.”
When I talk with customers, I frequently hear about three challenges. First, IT is under increasing pressure from the CXO because it has become the bottleneck to the organization's competitiveness. The dev team has an app they want deployed, and it needs to be deployed instantly. With the myriad rich policies the app carries (switching, routing, load balancing, quality of service), the number of changes needed on production elements is high, requiring trouble tickets, change orders, and so on. That delays deployment, which in turn keeps the organization from remaining competitive.
Second, customers continue to deal with security breaches, and with the hard truth that a perimeter firewall, while useful, is simply too far away to protect the mission-critical workloads they care about. And when a breach occurs, the customer is not in a position to easily quarantine the compromised workload.
Finally, customers tell us they need to contain costs: the number of operator interventions is too high, the degree of consolidation is still not good enough, and the capex to acquire and the opex to run the datacenter are both high.
“The ability to spin up a software-defined network in about eight minutes while eliminating a $20,000 cost is a huge benefit.”
Chris Amaris, Chief Technology Officer, Convergent Computing
We have been deploying Windows Server 2016 with customers since the earliest Technical Previews, and here is what one of them had to say. You, too, can easily gain these benefits by bringing our Azure-inspired software-defined networking capabilities into your datacenter.
Azure Inspired SDN
Diagram: Azure-inspired SDN layered on top of the physical infrastructure.
So how do we do it? In Azure, we build overlays on top of the underlying physical fabric to provide greater agility, enhance security, and reduce costs. It goes without saying that the physical network infrastructure of a datacenter is important: it provides resiliency, multi-pathing, and high throughput. These are the attributes we build into our datacenter network fabric, and we then use overlays to gain the SDN benefits described earlier.
WS 2016 Virtualizes the Entire Customer Network for Azure Agility
Diagram: Internet connectivity (direct, plus VPN and ExpressRoute) into a virtual network with frontend 10.1/16, mid-tier 10.2/16, and backend 10.3/16 subnets, a VPN gateway, and AD/DNS; alongside switching and routing, load balancers, firewalls, edge gateways, and other physical appliances. With predictable performance!
Windows Server 2016 takes the Azure SDN designs and makes them available to customers. Specifically, it virtualizes the entire customer network, from switching and routing to load balancers, firewalls, edge gateways, and any third-party physical appliances. And we provide predictable performance to go with it.
[New!] Network Controller: a scale-out control plane for your virtualized network
Diagram: a datacenter management tool (SCVMM, PowerShell, Azure Stack) drives the Network Controller, which programs the Hyper-V vSwitch on each Hyper-V host; VMs connect through the vSwitch to the physical top-of-rack switch and on to the Internet.
The central automation point for the network is the new Network Controller, which takes any user-defined policies and pushes them southbound to the Hyper-V hosts, where the policies are enforced. The Network Controller is of course highly available, and it scales out because it is not in the tenants' data path.
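The controller's northbound interface is a REST API, with a PowerShell module layered on top. Here is a minimal sketch of querying it, assuming a controller whose REST endpoint is the hypothetical https://nc.contoso.com and a management host with the NetworkController module installed:

# Network Controller REST endpoint (hypothetical FQDN).
$uri = "https://nc.contoso.com"

# Enumerate the logical (underlay) networks the controller manages.
Get-NetworkControllerLogicalNetwork -ConnectionUri $uri | Select-Object ResourceId

# The same resources are reachable over raw REST for non-PowerShell SDN applications.
Invoke-RestMethod -Uri "$uri/networking/v1/logicalNetworks" -UseDefaultCredentials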
Virtual Networks for Flexible Workload Deployment
L2 switching and distributed routing
[New!] VXLAN encapsulation
[New!] OVSDB for provisioning policy
[New!] REST API for SDN applications
Diagram: Blue Sales, Red, and Blue Finance tenant networks share the DC fabric; packets carry a VNI (5001 or 6001) plus a MAC across a VXLAN tunnel.
Among the most important policies deployed by the controller are those for virtual networks. Any tenant onboarded into the cloud is placed in a virtual network that isolates it from the other tenants running on top of the same multi-tenant fabric. New in this release, based on customer requests, we support VXLAN encapsulation, OVSDB as the southbound protocol, and a RESTful API for configuration. In the picture, the red and blue tenants have the same IP address space and communicate over a VXLAN tunnel, with the Virtual Network Identifier (VNI) letting the vSwitch deliver the packets to the correct VMs.
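A minimal sketch of creating such a virtual network through the controller's PowerShell interface. The resource names, the 10.1.0.0/16 address space, and the HNVProviderNetwork underlay are hypothetical, and the provider logical network must already exist:

$uri = "https://nc.contoso.com"   # Network Controller REST endpoint (hypothetical)

# Reference the provider (underlay) logical network the VXLAN overlay rides on.
$providerNetwork = Get-NetworkControllerLogicalNetwork -ConnectionUri $uri -ResourceId "HNVProviderNetwork"

# Define one tenant subnet inside the virtual network.
$subnet = New-Object Microsoft.Windows.NetworkController.VirtualSubnet
$subnet.ResourceId = "Subnet1"
$subnet.Properties = New-Object Microsoft.Windows.NetworkController.VirtualSubnetProperties
$subnet.Properties.AddressPrefix = "10.1.1.0/24"

# Assemble the virtual network: bring-your-own address space, isolated on the wire by a VNI.
$vnetProps = New-Object Microsoft.Windows.NetworkController.VirtualNetworkProperties
$vnetProps.AddressSpace = New-Object Microsoft.Windows.NetworkController.AddressSpace
$vnetProps.AddressSpace.AddressPrefixes = @("10.1.0.0/16")
$vnetProps.LogicalNetwork = $providerNetwork
$vnetProps.Subnets = @($subnet)

New-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "MyNetwork" -Properties $vnetProps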
Hybrid SDN Gateways for a cloud on your terms – your workloads, anywhere
Diagram labels: public cloud; Woodgrove HQ; Contoso HQ; MPLS WAN and MPLS routers; Exchange on VLAN 30; VLAN 40; GRE tunnel over the public internet; a BGP-speaking gateway VM pool at the Internet edge; Contoso VNet; Woodgrove VNet; SQL farm. New for 2016: M+N resiliency, multi-tenant forwarding, dynamic/transit routing, REST API.
In addition to a virtual network running on-premises, customers are increasingly interested in using the power of a public cloud. Microsoft has the unique benefit of running both a public and a private cloud, giving customers the ability to move easily between the two environments. Gateways provide connectivity across clouds, using either a site-to-site (S2S) connection over the Internet or private MPLS connectivity into Azure via ExpressRoute. Within a cloud, access to resources that are not network-virtualized goes through a multi-tenanted forwarding gateway.
A key aspect of our gateways is multi-tenancy: a single VM can act as a gateway for up to a hundred different tenants instead of requiring separate gateway VMs. New in this release, we support M+N resiliency, which means you don't need a dedicated failover gateway for every active S2S gateway VM. We also support transit routing: if two branch offices have routes to each other and to the hoster, and one route between a branch office and the hoster goes down, the other route remains available. And as with everything else, there is a RESTful API for gateways too.
Hybrid flexibility gives you: flexible multi-site connectivity with dynamic routing; high-speed connectivity to tenant virtual networks over MPLS or Metro Ethernet; access to physical network resources from tenant virtual networks.
It's highly available and efficient: simplified M+N redundant gateway pools for high availability and load balancing; easy scaling of gateway deployments; reduced capex through multi-tenancy.
It's easy to manage and integrate: easy deployment through SCVMM; centralized control and management through the SDN Network Controller; tenant self-service through Windows Azure Pack; integration with existing tenant portals via the SDN REST APIs or PowerShell.
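A minimal sketch of onboarding one tenant S2S IPsec connection through the controller. It assumes a virtual gateway resource Contoso_VGW already exists; the addresses, secret, and resource names are hypothetical, and the property types are those exposed by the NetworkController module:

$uri = "https://nc.contoso.com"   # Network Controller REST endpoint (hypothetical)

# Properties of a site-to-site IPsec tunnel to a branch office.
$conn = New-Object Microsoft.Windows.NetworkController.NetworkConnectionProperties
$conn.ConnectionType = "IPSec"
$conn.DestinationIPAddress = "203.0.113.10"   # branch edge device
$conn.IPSecConfiguration = New-Object Microsoft.Windows.NetworkController.IPSecConfiguration
$conn.IPSecConfiguration.AuthenticationMethod = "PSK"
$conn.IPSecConfiguration.SharedSecret = "ReplaceWithAStrongSecret"

# Static route that sends the branch prefix into the tunnel.
$route = New-Object Microsoft.Windows.NetworkController.RouteInfo
$route.DestinationPrefix = "10.0.0.0/16"
$route.Metric = 10
$conn.Routes = @($route)

New-NetworkControllerVirtualGatewayNetworkConnection -ConnectionUri $uri -VirtualGatewayId "Contoso_VGW" -ResourceId "Contoso_S2S" -Properties $conn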
[New!] A cloud-optimized Load Balancer for cloud infrastructure and tenants
Scales out: bypasses the MUX for outgoing traffic with Direct Server Return (DSR), and load balances the load balancers themselves.
Multi-tenanted: a single MUX VM carries load-balancing policies across hundreds of tenants and VIPs.
Stateless: NAT and health probes run on the DIP host.
REST API for SDN applications.
Diagram: a client reaches a VIP through the edge routers and LB MUXes; the controller pushes tenant definitions (VIPs, DIPs), NAT/probe policy, and stateless tunnel mappings down to the Azure VMSwitch on each host where the DIP VMs live; return traffic leaves directly via the VIP.
Adding to VXLAN overlay networks and our hybrid SDN gateways, we also support an L3/L4 load balancer. This is an all-software load balancer built from the ground up for unique cloud requirements: we need to be able to create and delete lots of VIPs and DIPs as new tenants and apps are onboarded, dynamically add and remove MUXes for increased scale, load balance the load balancers themselves, and scale out the MUXes.
In the picture, a client at the top tries to access a public IP. The request reaches an edge router, which sees multiple routes to the VIP advertised over BGP by the load-balancer multiplexers (MUXes). It picks one of the MUXes and forwards the packet along. The MUX then has to perform a network address translation, changing the packet's destination address to one of the VMs that will actually service it. Instead of performing this function on the MUX, as traditional load balancers do, in Azure we move it to the host where the VM runs; this makes the MUX largely stateless and gives it much better scaling properties. Even better, the return traffic can bypass the MUX completely using a capability we call Direct Server Return. So if a request comes in to play a large video, the small request goes through the MUX, but the large response bypasses it, letting the MUX scale much better.
Software load balancer details: L3/L4; highly available and scalable; north-south and east-west load balancing; network address translation. Optimized for SDN with Direct Server Return (traffic from a VM goes directly to its destination without an intermediate stop at an appliance), east-west optimization (traffic between VMs in the same cloud can bypass the MUX entirely), and health probes distributed to the hosts. This reduces load on the infrastructure dramatically while increasing performance and availability.
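A minimal sketch of defining a public VIP on the software load balancer through the controller. The VIP address, resource names, and ports are hypothetical; a full configuration would also reference a VIP logical subnet on the front end, and tenant NICs join the back-end pool separately:

$uri = "https://nc.contoso.com"   # Network Controller REST endpoint (hypothetical)

# Front end: the VIP clients will connect to.
$fe = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfiguration
$fe.ResourceId = "WebVIP"
$fe.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerFrontendIpConfigurationProperties
$fe.Properties.PrivateIPAddress = "41.40.40.9"
$fe.Properties.PrivateIPAllocationMethod = "Static"

# Back end: the pool the web-tier DIPs will be added to.
$be = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPool
$be.ResourceId = "WebDIPs"
$be.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancerBackendAddressPoolProperties

# Rule: spread TCP/80 across the pool, referencing front end and pool by resource ref.
$feRef = New-Object Microsoft.Windows.NetworkController.ResourceRef
$feRef.ResourceRef = "/loadBalancers/WebLB/frontendIPConfigurations/WebVIP"
$beRef = New-Object Microsoft.Windows.NetworkController.ResourceRef
$beRef.ResourceRef = "/loadBalancers/WebLB/backendAddressPools/WebDIPs"
$rule = New-Object Microsoft.Windows.NetworkController.LoadBalancingRule
$rule.ResourceId = "Http"
$rule.Properties = New-Object Microsoft.Windows.NetworkController.LoadBalancingRuleProperties
$rule.Properties.Protocol = "TCP"
$rule.Properties.FrontendPort = 80
$rule.Properties.BackendPort = 80
$rule.Properties.FrontendIPConfigurations = @($feRef)
$rule.Properties.BackendAddressPool = $beRef

$lb = New-Object Microsoft.Windows.NetworkController.LoadBalancerProperties
$lb.FrontendIPConfigurations = @($fe)
$lb.BackendAddressPools = @($be)
$lb.LoadBalancingRules = @($rule)

New-NetworkControllerLoadBalancer -ConnectionUri $uri -ResourceId "WebLB" -Properties $lb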
[New!] Internal Load Balancing with 0% CPU utilization!
The majority of traffic in a datacenter is east-west. A unique Azure design bypasses the MUX entirely for such traffic, with significant performance gains: throughput gain ~25%, latency drop ~35%, and MUX CPU utilization of 0%.
Diagram: a web-tier client's first request to an internal VIP traverses the LB MUX, which answers with an ICMP redirect; using the controller's mappings, the Azure VMSwitch on each host then NATs additional requests directly between the web tier and the DB-tier DIPs, bypassing the MUX, with return traffic going direct as well.
It gets better. As many of you know, most of the traffic in a datacenter is east-west, for example traffic that flows between the tiers of an application. We use a unique Azure design that lets such traffic, once the control path is set up, bypass the MUX completely, allowing the system to scale significantly. The most interesting number is that MUX CPU utilization for this traffic drops to 0%. These are the kinds of Azure-inspired designs we will keep bringing to our customers.
App Deployment Agility
Demo 1:
Take a two-tier app with web and DB tiers.
Show how quickly it deploys into a virtual network with load-balancing policies.
Show internal load balancing with 0% MUX CPU utilization.
Deployment Agility: Start with the App
[Diagram: the app's VMs (Web Server 1 and 2, File Server 1 and 2, and an Active Directory VM) arranged across Tiers 1 through 3.]
Deployment Agility: Create subnets
[Diagram: each tier is placed in its own /24 subnet: Tier 1 in Subnet1, Tier 2 in Subnet2, Tier 3 in Subnet3.]
Deployment Agility: Wrap in a Virtual Network
[Diagram: the three subnets are wrapped in a virtual network named "MyNetwork".]
Deployment Agility: Provide access to the outside
[Diagram: the virtual network gains an outbound NAT connection to the outside world.]
Deployment Agility: Make it available to users!
[Diagram: a public VIP now fronts the web servers and an internal VIP fronts the internal tiers, alongside the outbound NAT.]
Deployment Agility: Done … for now.
[Diagram: the completed deployment: virtual network "MyNetwork" with three /24 subnets, a public VIP, an internal VIP, and outbound NAT.]
Layered Security, Protection, and Isolation
Diagram: threats approach from outside and inside; layered defenses range from the guest's own firewall and ACLs, through the VM firewall, the distributed firewall and NSGs, appliances, and SDN virtual-network isolation, to DDoS protection at the physical network protecting cloud services and infrastructure.
Let's look at the classical security model for virtualized workloads in a datacenter. Typically you have some protection around the guest and some protection at the perimeter of the datacenter. But as we know, threats frequently get in, and once they are in, they wreak havoc; the annual cost of dealing with security attacks runs to hundreds of billions of dollars. What SDN lets us do is bring in additional layers of security: virtual networks that provide the outermost isolation of the app from other apps, distributed firewalls that apply access control lists (ACLs) for more granular security, and virtual appliances as well. Let's look at each of these in a bit more detail.
[New!] Micro-Segmentation to segment your network based on app and security needs
Dynamically segment the network to meet evolving security needs.
5-tuple stateful distributed firewall, in both directions.
Associated with subnets or NICs; update ACLs independent of VMs.
For VMs and containers.
Diagram: an on-premises 10.0/16 network and the Internet connect over ExpressRoute and VPNs through a VPN gateway into a virtual network with frontend 10.1/16, mid-tier 10.2/16, and backend 10.3/16 subnets.
Windows Server 2012 R2 supported a distributed firewall for applying ACLs; with Windows Server 2016, we add the notion of Network Security Groups (NSGs) to dynamically segment your network based on app or security needs. For instance, we may choose to put the web servers in a DMZ with a different set of policies than the middle tier or backend. A Network Security Group is really just a collection of ACLs, so any VM attached to an NSG automatically inherits all the policies that are part of it. The ACLs themselves can be applied not just to a NIC but to a subnet as well, so a single rule can allow or block communication to and from an entire subnet. In the picture, the Internet can speak to the front end, but the front-end machines can't speak to each other; the front end can speak to the middle tier, and the middle tier to the back end, only through a specific ACL. All these policies are trivially deployed and dynamically updated; a sketch of building one such ACL follows below.
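A minimal sketch of a 5-tuple ACL pushed through the controller (addresses, ports, and resource names are hypothetical); attaching the resulting ACL to a subnet or NIC is what puts it into effect:

$uri = "https://nc.contoso.com"   # Network Controller REST endpoint (hypothetical)

# Allow HTTPS from the front-end range; everything else on the subnet keeps the default policy.
$rule = New-Object Microsoft.Windows.NetworkController.AclRule
$rule.ResourceId = "AllowFrontendHttps"
$rule.Properties = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
$rule.Properties.Protocol = "TCP"
$rule.Properties.SourceAddressPrefix = "10.1.0.0/16"
$rule.Properties.SourcePortRange = "0-65535"
$rule.Properties.DestinationAddressPrefix = "*"
$rule.Properties.DestinationPortRange = "443"
$rule.Properties.Action = "Allow"
$rule.Properties.Type = "Inbound"
$rule.Properties.Priority = "100"
$rule.Properties.Logging = "Enabled"

$aclProps = New-Object Microsoft.Windows.NetworkController.AccessControlListProperties
$aclProps.AclRules = @($rule)

New-NetworkControllerAccessControlList -ConnectionUri $uri -ResourceId "MidTierNSG" -Properties $aclProps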
[New!] User Defined Routes to route tenant traffic to Virtual Appliances
Tenant-defined routing tables for virtual networks.
Enables routing traffic to a virtual appliance; the virtual appliance needs no awareness of SDN.
Diagram: the same virtual network (frontend 10.1/16, mid-tier 10.2/16, backend 10.3/16) reachable from on-premises 10.0/16 and the Internet over ExpressRoute and VPNs through a VPN gateway.
We can go further. In Windows Server 2016, we allow defining custom routes that redirect traffic through a virtual appliance. The appliance has no knowledge of SDN; it simply performs its function as it normally would. So, in the event of an attack, IT can immediately push out a security policy to quarantine one of the app's tiers and force all its traffic through a virtual appliance firewall, as an example. We share the appliance ecosystem with Azure: as Azure's appliance ecosystem grows, so does Windows Server's.
Virtual appliances let third-party technology provide additional network functions. The virtual network owner defines the routing table for the virtual network to send traffic to appliance VMs within it. Additionally, traffic can be mirrored from VM ports to virtual appliances for deeper inspection and analysis, out of the data path. Mirroring takes full advantage of SDN, allowing the mirrored VM and the virtual appliance to migrate independently between hosts. Any Hyper-V virtual appliance can be used in a virtual network; the appliance does not need to be aware that it is in one.
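A minimal sketch of a user-defined route that sends all of a subnet's traffic to a virtual appliance (the 192.168.1.10 appliance address and the resource names are hypothetical):

$uri = "https://nc.contoso.com"   # Network Controller REST endpoint (hypothetical)

# Default route pointing at the appliance's DIP.
$route = New-Object Microsoft.Windows.NetworkController.Route
$route.ResourceId = "DefaultViaAppliance"
$route.Properties = New-Object Microsoft.Windows.NetworkController.RouteProperties
$route.Properties.AddressPrefix = "0.0.0.0/0"
$route.Properties.NextHopType = "VirtualAppliance"
$route.Properties.NextHopIpAddress = "192.168.1.10"

$rtProps = New-Object Microsoft.Windows.NetworkController.RouteTableProperties
$rtProps.Routes = @($route)

# Publish the route table; it takes effect once associated with a virtual subnet.
New-NetworkControllerRouteTable -ConnectionUri $uri -ResourceId "QuarantineRoutes" -Properties $rtProps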
[New!] Port Mirroring to mirror tenant traffic
Mirror inbound and outbound packets on a port to a virtual appliance.
Many ports to one appliance: a single appliance can serve multiple ports.
5-tuple rules select the subset of traffic to mirror.
The appliance is not in the data path of VM-to-VM communication, and packets are not modified in any way.
Diagram: the same virtual network, with mirrored traffic delivered to the appliance out of band.
Finally, routing traffic through an appliance can be a bit heavy handed, since the appliance then sits in the data path of all traffic. We also support mirroring both inbound and outbound traffic, filtered by any 5-tuple rule, to a remote port; in fact, many ports can be remoted to a single appliance. This way the appliance can perform purely passive analysis of the traffic, and if anything suspicious is observed, suitable rules can be deployed to update policies, quarantine traffic, and so on. All of this in seconds.
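In the SDN stack, mirroring policy is pushed by the Network Controller; as a simpler illustration of the switch-level capability it builds on, here is plain Hyper-V port mirroring between two vNICs on the same vSwitch (the VM names are hypothetical):

# Mark the monitored VM's vNIC as a mirror source...
Set-VMNetworkAdapter -VMName "WebServer1" -PortMirroring Source

# ...and the analysis appliance's vNIC as the mirror destination.
Set-VMNetworkAdapter -VMName "MonitorAppliance" -PortMirroring Destination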
Micro-Segmentation and Dynamic Security
Demo 2:
Show the phishing attack.
Show how an attacker can hop from one machine to another and do damage.
Show the application of ACLs in a segment.
Show routing to a virtual appliance.
Show mirroring to Advanced Threat Analytics (ATA).
Show that firewall policies apply not just to VMs but to containers as well.
Application at risk! Phishing for secrets
[Diagram: the deployed virtual network "MyNetwork", now the target of a phishing attack.]
Application at risk! The attack
[Diagram: the attack spreading through the deployment; the web servers, file servers, and outbound NAT are marked as compromised.]
Dynamic Security: Micro-segmentation
[Diagram: the same virtual network, with its tiers segmented from one another.]
Dynamic Security: Using the distributed firewall
[Diagram: Front End and Back End NSGs applied to the subnets through the distributed firewall.]
Dynamic Security: Virtual Appliances
[Diagram: an NSG steers traffic through a Virtual Appliance VM inserted into the virtual network.]
[New!] VMMQ for 40G Ethernet Performance
VMMQ is the fourth-generation performance enhancement:
RSS was in WS2008.
VMQ arrived in WS2008 R2.
vRSS (VMQ with RSS in the VM) came in WS2012 R2.
VMMQ (hardware offload of vRSS) is in WS2016.
Finally, we have performance. If you remember our performance journey: in the 2012 timeframe we started with VMQ; in those days, on those processors, we would get about 3.5 Gbps, though now we get closer to 7 Gbps. In the 2012 R2 timeframe we introduced virtual RSS, which bumped bandwidth up significantly, to nearly 21 Gbps. And in the 2016 timeframe, we are now at about line rate on 40G networks. Note that this is not RDMA, where we recently showed off 400G bandwidth, nor SR-IOV; rather, this is traffic that flows through the vSwitch. We haven't tried this much on 100G networks yet, but we expect the numbers to keep improving, as you'd expect.
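A quick sketch of turning VMMQ on for a VM's vNIC and checking that it took, assuming a WS2016 host whose physical NIC supports VMMQ (the VM name and queue-pair count are illustrative):

# Enable hardware-offloaded vRSS (VMMQ) on the vNIC.
Set-VMNetworkAdapter -VMName "WebServer1" -VmmqEnabled $true -VmmqQueuePairs 16

# Verify the vRSS/VMMQ state the vSwitch reports for that adapter.
Get-VMNetworkAdapter -VMName "WebServer1" | Select-Object Name, VrssEnabled, VmmqEnabled, VmmqQueuePairs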
[New!] Converged NIC for cost optimized Storage and Networking
What we've really gone after is reducing cost while preserving this performance. There aren't many customers who really need 40G into their VMs; what customers want is to maximize the utilization of their NICs. In the 2012 R2 timeframe, we needed separate physical networks for RDMA storage and for networking: SMB traffic (which included much of the file-server traffic but also things like live migration) would use RDMA on a separate physical network, while regular Ethernet traffic rode its own network. This effectively doubled infrastructure costs, and as NIC speeds go from 1G to 10G and higher, having separate NICs becomes a real cost center for IT.
With WS2016, we converge RDMA storage and Ethernet traffic onto the same underlying NICs. The vSwitch exposes an RDMA NIC that SMB can bind to for RDMA traffic, while Ethernet traffic continues to flow as it always has. You may ask about Quality of Service: we don't want RDMA traffic starving Ethernet traffic either. We allow carving out traffic classes for storage and networking, and then let the customer partition the bandwidth meant for compute VMs as they see fit. Customers can apply per-VM QoS policies on these converged-NIC systems to cap how much bandwidth a VM uses, or to guarantee it a minimum. To see this in action, here's Don.
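A minimal sketch of standing up a converged host: a Switch Embedded Teaming (SET) vSwitch over two physical NICs, plus an RDMA-enabled host vNIC for SMB (the adapter and switch names are hypothetical):

# Create a SET-teamed vSwitch over two physical NICs.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add a host vNIC for SMB/storage traffic and enable RDMA on it; tenant Ethernet traffic shares the same switch.
Add-VMNetworkAdapter -ManagementOS -Name "SMB1" -SwitchName "ConvergedSwitch"
Enable-NetAdapterRdma -Name "vEthernet (SMB1)"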
[New!] QoS for predictable storage and networking perf on a Converged NIC
QoS per VM in the Hyper-V switch.
Carve out traffic classes for storage and networking.
Apply QoS limits to cap the maximum bandwidth for a VM.
Use QoS reservations to guarantee minimum bandwidth for a VM.
Diagram: RDMA traffic rides traffic class TC=X and TCP/IP traffic rides TC=0; the NIC applies bandwidth reservations per traffic class.
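A sketch of both halves under stated assumptions: the NIC name, percentages, priority value, and VM names are illustrative, and the per-VM minimum weight requires a vSwitch created with -MinimumBandwidthMode Weight:

# DCB side: reserve 50% of the physical NIC for SMB Direct (RDMA) traffic.
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Enable-NetQosFlowControl -Priority 3
Enable-NetAdapterQos -Name "NIC1"

# vSwitch side: cap one VM's bandwidth and give another a guaranteed minimum weight.
Set-VMNetworkAdapter -VMName "WebServer1" -MaximumBandwidth 2GB
Set-VMNetworkAdapter -VMName "DBServer1" -MinimumBandwidthWeight 50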
DEMO – Cost Optimized, Predictable Performance
Demo 3:
Show RDMA at 40G.
Show VMMQ at 40G.
Use DCB to carve out bandwidth for storage and networking.
Show RDMA now using only 20G.
Show QoS on the remaining 20G with different weights.
Customer Challenges Solved
Agility: With the cloud-optimized SDN infrastructure in Windows Server 2016, customers can deploy complex workloads rapidly across any cloud.
Security: With Windows Server 2016, customers can dynamically segment their network to precisely model their security needs, while being able to react quickly to breaches.
Costs: It's all built in. The network controller, load balancer, firewall, and gateways are all included as part of Windows Server 2016 and System Center 2016.
SDN Feature Summary for WS 2016
Network controller [NEW!]: central control plane; fault tolerant; control with System Center VMM, PowerShell, or the RESTful API.
Virtual networking: BYO address space; distributed routing; VXLAN [NEW!] and NVGRE.
Network security [NEW!]: micro-segmentation with the distributed firewall and Network Security Groups; BYO virtual appliances via user-defined routing or mirroring.
Robust gateways: M+N availability model [NEW!]; multi-tenancy for all modes of operation; BGP transit routing [NEW!].
Software load balancing [NEW!]: L3/L4 load balancing (north-south and east-west) with DSR; NAT; for tenants and cloud infrastructure.
Performance [NEW!]: converged NIC for both RDMA and Ethernet traffic; VMMQ for 40G Ethernet performance; QoS for predictable performance.
And that's a wrap. Windows Server 2016 is a huge release for SDN, as we bring more Azure infrastructure designs to our customers. The SDN stack in 2016 is also the foundation for the Microsoft Azure Stack, with consistency with Azure in UI, API, and services.
Next steps
Learn more: www.microsoft.com/WindowsServer2016
Windows Server Blog:
© 2014 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.