1
11/12/2018 Infrastructure Provisioning Kenon Owens Sr. Product Marketing Manager Microsoft Corporation Microsoft Virtual Academy Welcome to Infrastructure Provisioning with System Center 2012 R2. My name is Kenon Owens and I am a Senior Product Manager on the System Center and Windows Server team, focusing predominantly on virtualization and building your private clouds. Before I was on the System Center team I worked for a computing virtualization vendor, so I have grown up in the virtualization space and seen how people have been able to really lessen the costs of building and deploying their infrastructure by using virtualization. Taking that into the cloud is where our customers are going to see tremendous cost savings, because they are going to gain operational savings on top of it. © 2012 Microsoft Corporation. All rights reserved. Microsoft, Windows, and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
2
Agenda Introduction Deploy Compute, Storage, and Networking
Agenda:
- Introduction
- Deploy Compute, Storage, and Networking
- Constructing the Private Cloud
- Manage Across Clouds
- Day to Day Operations
- Architecture Reference

What we are going to talk about today, across the multiple videos we are going to have, is how you are really focusing on different tasks. When we built Windows Server and System Center 2012 R2 we were focusing not just on a particular product or a particular component. It is not just Virtual Machine Manager. It is not just Windows Server, but how all of these things work together. So with Infrastructure Provisioning you are really focusing on how to take all of these different physical resources, pool them together, and build something for your organization that it can use. We are going to split this into 3 different videos:
1. The first one is going to talk about how we can use System Center to help deploy your compute, your storage, and your networking resources. Here we are really managing the underlying physical resources, bringing them under management by System Center, and then being able to use them for your cloud environment.
2. That leads into the next video, where we are going to talk about how we can take all of the different resources we have and construct them together into a private cloud; how we can divvy up these resources to the different individual stakeholders that need access to them; and then what you can do when you start pulling this out across clouds, where you are not talking about just on-premises, but maybe working with a hoster or something like that.
3. Lastly, our last video is going to talk about day-to-day operations. I have this cloud configured. I have deployed my compute, storage, and networking resources. I have built my cloud environments. How do I keep this thing up and running?
How do I now deploy my services, those things that my customers are really interested in, onto this cloud set of resources? And we will pull that all together with an architecture reference showing how all these things look; the bits and bytes as far as, you know, VMM and all the things that are necessary to run within VMM.
3
Customer Needs and Challenges
Customer Needs and Challenges

NEEDS:
- Central management of infrastructure resources
- Better abstraction of diverse infrastructure into assignable pools of resources
- Deploy the underlying management architecture
- Decrease capital and operational costs of infrastructure
- Protect and use existing investments and infrastructure while taking advantage of public cloud resources

CHALLENGES:
- Operational costs are increasing
- Utilize both on-premises and other resources to lower capital costs
- How to easily deploy Compute, Storage, and Networking resources
- Use bigger, more capable servers and infrastructure more effectively
- Maintain separation of resources in multitenant environments

Because of the way we have looked at how organizations are functioning, they have different needs and challenges. When we talk about Infrastructure Provisioning, what we are really focusing on are some of these particular needs, like how an organization can take all of the existing disparate types of resources it has and pool them together into something it can then divvy up and dish out to all the different users that are out there. What this means is that I want to be able to not only use my existing resources, but also buy all these new whiz-bang servers that are out there, with massive processors and massive amounts of memory, and integrate them into my environment as well. But if I have all these different types of resources out there, how do I make sure they are being utilized in the most efficient way possible? The other thing is that what virtualization did for capital costs, the private cloud and cloud services are going to do for your operational costs.
By adding in things like extreme automation and the ability to pool all of these resources together, I can more easily lower the operational costs of running this private cloud day to day, ensuring that I get the best use of resources for my individual users. So when you look at all these needs and challenges that our customers have, it really comes down to the different topics that we have talked about and are going to be talking about today.
4
Scenario Summary Deploy Compute, Storage, and Networking
- Constructing the Private Cloud
- Management Across Clouds
- Day to Day Operations
- Architecture Reference

In this first video we are going to be focusing on the deployment and the management of your compute, your storage, and your networking resources. Before I go into that, one of the cool things about System Center is that multiple different components of System Center work together to help you build your environment and ensure that you have the resources available to build these clouds. But before I even go into the one that we are going to focus mostly on in these next few videos, which is Virtual Machine Manager, let's do a quick little tour of VMM just to see a little bit about what it looks like, so that when I start talking about things later on you will have a grounding as to what it is and where things are.
5
Demo

Now I am going to switch over to the demo, and this demo will be on one of our systems here, so let me flip over to it. In this demo I am going to quickly give you a tour of Virtual Machine Manager. The reason I am just going to be showing you Virtual Machine Manager in this demo today is that, for the most part, most of the things we do in Infrastructure Provisioning are going to be done through Virtual Machine Manager; I am not going to have to go out to all these different other systems. First of all, the main focus, what you want to get to, is these services and these clouds. Now, a cloud is an abstraction of your physical resources. We will talk about that later on in the videos today, but the cloud is where you are going to keep your different virtual machines, and I can have different clouds for different environments, like my development environment, my production environment, maybe my U.S. environment or my Europe or Asia Pacific environment. The cloud is where I am going to aggregate all those underlying physical resources. A cloud is made up of different hosts, and my hosts host the virtual machines; those hosts can be Hyper-V hosts, VMware hosts, or Citrix XenServer hosts, and inside the hosts we are going to host VMs and services. We will talk about services later on, so I will focus on that in a little bit. But really, underneath it all is the fabric; the fabric is your management for your individual compute, storage, and networking resources. If we look at the servers that we have, we have multiple different servers that we are managing. Some of them are managed because they are hosts that are hosting virtual machines, like this primary server up here, but other ones are just infrastructure servers that we use for things like the library, or that we use for storage and networking resources.
So, you have all the different hosts that you are connected to inside of there, and they are organized into things like different host groups. When you go on to networking and storage management, the networking is where we are going to focus on things like our different logical network environments. These logical network environments help me support a multitenant environment, which allows me, as we will talk about later on in this talk, to really branch out and support multiple different tenants and resources within our environment. Lastly, we have Storage Management. What Storage Management allows me to do is create classifications and pools of existing storage for allocation to the underlying physical hosts. I can take LUNs that are created, or I can manually create them here inside of Virtual Machine Manager, attach them to my physical hosts, and then use them for virtual machines. So, Fabric Management is what we are really going to focus on in this first video, and then the other videos are going to focus on the other pieces, like the VMs and services. We have the fabric; we have the library, and the library is where we store our different template configurations, for both virtual machine templates and the services that I have talked about. Lastly, we have jobs and settings, which we use to create things like user roles and such. This was a quick tour of Virtual Machine Manager. Now I am going to switch back to the presentation and we will focus on the different topics we are talking about today, which are compute, storage, and network management.
6
Deploy Compute, Storage, and Networking
7
Compute
8
Deploy Hyper-V onto Bare-Metal Servers
Deploy Hyper-V onto Bare-Metal Servers

- Deploy: deploy a brand-new machine with the hypervisor enabled, through the baseboard management controller.
- Discover: deep discovery to inventory a potential host and determine its hardware inventory for post-install configuration.
- Approved Configurations: ensure hosts are deployed with the approved OS configurations, including virtual networking and NIC teaming.

Let me hop back into the presentation here, and we will now focus on deploying the compute, storage, and network resources. All right, so if we talk about deploying compute, storage, and networking, the first thing we are going to talk about is the underlying compute fabric. When we talk about the compute fabric here, what we are really focusing on is the underlying hypervisor. With Virtual Machine Manager in System Center 2012 we introduced the capability to deploy Hyper-V directly onto bare-metal servers from Virtual Machine Manager itself. So now Virtual Machine Manager can be the tool that allows me to connect to a physical server that has nothing deployed to it. You have just plugged in, say, the out-of-band management card, and we can deploy an OS on it, boot it up, set up Hyper-V on that system, and get it deployed and running. With 2012 SP1 and 2012 R2 we have made some improvements on what types of things you can do within that server and how you can configure it, for things like setting up your network teaming or connecting to your fibre channel SANs. With SP1 we introduced this thing called deep discovery. Deep discovery will go out and query the machine. It will learn about what physical devices are connected to the machine so that you can make an intelligent choice as to which network adaptors are teamed together, which ones are set up for networking, which ones are set up for virtual machines, and that type of thing.
So we can now deploy a brand-new server by doing a deep discovery and using our approved configurations for deploying that server. In other words, I will take a profile of a host and use that profile to deploy across all these different servers that I have out there. That allows me to do things like, say, maybe I have 100 blade servers and I want to deploy Hyper-V on all those different systems. I can create one configuration for those systems, because I know the exact same NICs are in them, these ones are teamed up over here, and maybe these ones are used for management, and I can then just boot up each system, configure it, and deploy it. So let's see how that actually works.
9
Bare-Metal Deep Discovery in Action
(Diagram: bare-metal server, WDS server, and VMM server.) If we go to this next slide here, what we are really focusing on is: I have this bare-metal server. I plugged it into my network. I hooked up the out-of-band management (if it is HP, the iLO; if it is Dell, the iDRAC; or whatever system I have for that out-of-band management) and I turn it on. The VMM server goes and talks to the out-of-band management controller and tells that server to reboot. When the server boots up it does an F12; basically it does a PXE boot and talks to the WDS server. The WDS server then authorizes that server against VMM. VMM says, yes, that is the right server, boot it up. The WDS server throws down a WinPE image onto that system. The system boots up and then runs a set of calls to collect the hardware inventory of that system. All of that hardware inventory is gathered and passed back to Virtual Machine Manager so that you, as an administrator, can decide which NICs you want to connect to, say, the management network; which ones you want to connect to VM networks; which ones you want to team together; and those types of things. Once you have done that deep discovery and you have decided which ones get assigned to what, you then go through the rest of the provisioning process.
1. OOB reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download VMM-customized WinPE
5. Execute a set of calls in WinPE to collect hardware inventory data (network adapters and disks)
6. Send hardware data back to VMM
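The six-step handshake above can be sketched as a small simulation. This is purely illustrative Python — the function and field names are invented for clarity and are not real VMM or WDS APIs:

```python
def deep_discover(server):
    """Walk the six-step deep-discovery handshake and return the hardware inventory."""
    trace = [
        "OOB reboot via the management controller",  # 1. VMM tells the BMC to reboot
        "boot from PXE",                             # 2. server network-boots
        "WDS authorizes the server against VMM",     # 3. only known servers proceed
        "download VMM-customized WinPE",             # 4. WDS sends the boot image
    ]
    # 5. WinPE runs a set of calls to collect the hardware inventory
    inventory = {"network_adapters": server["nics"], "disks": server["disks"]}
    trace.append("collect hardware inventory in WinPE")
    trace.append("send hardware data back to VMM")   # 6. admin can now assign NIC roles
    return trace, inventory

# A hypothetical bare-metal box with four NICs and two disks.
bare_metal = {"nics": ["NIC1", "NIC2", "NIC3", "NIC4"], "disks": ["Disk0", "Disk1"]}
trace, inventory = deep_discover(bare_metal)
print(len(trace), inventory["network_adapters"])
```

The point of the sketch is simply that discovery ends with an inventory the administrator can map onto roles (management network, VM networks, teams) before any OS is laid down.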
10
Automated Bare-Metal Hyper-V Deploy in Action
Automated Bare-Metal Hyper-V Deploy in Action (Diagram: WDS server, bare-metal server, Contoso domain, Hyper-V servers in host groups, VMM server, library server; the VHD, drivers, and host profile come from the library.) At that point it again takes the machine and does another reboot using the out-of-band management controller. The machine boots up again, does a PXE boot, and talks to the WDS server, which authorizes against VMM. VMM then throws down the Windows PE image, and at that point we start doing some of the customizations: any scripts for configuring partitions, any pre-OS-deployment configuration we have to do. Once all that is done and we have, let's say, the C drive created, partitioned, and formatted, we will copy down a VHD file, and that VHD file will be the system we boot from when we boot up this machine to run Hyper-V. So it will copy the VHD down. It will copy down all the drivers necessary, so if this is, say, an HP or a Dell system and it needs a custom SCSI or RAID driver, or it needs a driver for the fibre channel adaptor, we can copy all those drivers down to the machine. After we finish copying all those things down, we will customize the machine and give it the machine name we want it to have on the network. It will do a domain join to the domain, in this case the Contoso domain. Every time a machine joins a domain it has to reboot, so we will enable Hyper-V, do the reboot, and after it has rebooted it will be a part of the domain and have the Hyper-V role enabled. It will be part of our host group, and at that point we can create a cluster or whatever we want to do with it. Lastly, if you have any post-install scripts that you want it to run, you will be able to run those.
Afterwards you will be able to manage this thing as a fully managed host within VMM. This gives you a single point of being able to take a physical system, deploy an OS on it in a configuration that you have approved, using the correct mappings of the adaptors to the proper networks, and putting it all together so that from one centralized location you can take this machine from zero to a fully functioning Hyper-V host managed within VMM.
1. OOB reboot
2. Boot from PXE
3. Authorize PXE boot
4. Download WinPE
5. Run generic command execution scripts and configure partitions
6. Download VHD
7. Inject drivers
8. Customize and domain join
9. Enable Hyper-V
10. Run scripts post installation
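Under the same caveat (illustrative names only, not the real VMM objects or cmdlets), the ten-step provisioning flow can be modeled as a tiny state machine driven by a host profile:

```python
# The ten steps, in the order the slide lists them.
PIPELINE = [
    "OOB reboot",
    "boot from PXE",
    "authorize PXE boot",
    "download WinPE",
    "run generic command execution scripts and configure partitions",
    "download VHD",
    "inject drivers",
    "customize and domain join",
    "enable Hyper-V",
    "run scripts post installation",
]

def provision(host_profile, server_name):
    """Run every pipeline step, mutating the server state as the flow describes."""
    state = {"name": server_name, "domain": None, "hyper_v": False, "log": []}
    for step in PIPELINE:
        state["log"].append(step)
        if step == "customize and domain join":
            state["domain"] = host_profile["domain"]   # e.g. the Contoso domain
        elif step == "enable Hyper-V":
            state["hyper_v"] = True                    # the reboot happens here
    return state

# Hypothetical profile and server name.
profile = {"domain": "contoso.com", "vhd": "WS2012R2.vhd"}
host = provision(profile, "HV-HOST-01")
print(host["domain"], host["hyper_v"], len(host["log"]))
```

The design point the transcript makes is that the profile, not the operator, carries the configuration, so the same pipeline can be replayed across a whole rack of identical servers.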
11
Multi-Hypervisor Infrastructure Management
Multi-Hypervisor Infrastructure Management

Windows Server Hyper-V:
- Add domain-joined hosts
- Add non-trusted hosts
- Bare-metal deployment
- Windows Server 2012 R2, 2012, 2008 R2, 2008

VMware vSphere 4.1, 5.0*, 5.1:
- Add hosts through vSphere connection
- Split hosts into different host groups

Citrix XenServer 6.1:
- XenCenter not required
- Split hosts into different host groups

Now the thing about VMM is that we decided it is not just Hyper-V that we are going to manage; we are also going to manage the other hypervisors that are out there. So we support VMware vSphere: with System Center 2012 SP1 we supported vSphere 4.1 and 5.1, and with R2 we have added vSphere 5.0 support, so now we support all the latest versions of VMware. We have also updated our support for Citrix XenServer. One of the caveats with VMware vSphere, as you will see in this next slide, is that we manage vSphere through vCenter. In other words, I manage my Hyper-V hosts directly and my Citrix XenServer hosts directly, but to manage my VMware vSphere environment, the ESX servers, I have to manage them through vCenter.
* New in System Center 2012 R2 Virtual Machine Manager
12
Support for Multiple Hypervisors
Support for Multiple Hypervisors (Diagram: Virtual Machine Manager managing a host group containing Microsoft Hyper-V, VMware vSphere 5.1/5.0*/4.1 via vCenter Server, and Citrix XenServer 6.1.)

The nice thing about the way we support these multiple different hypervisors is that once we are managing them under VMM we treat them very similarly as far as how we deploy virtual machines and services to them. One of the cool things we have done with System Center 2012 is the ability to take these different hypervisors we are managing and add them into either their own host groups or combine them into the same host group, because there are times when, even though I have different hypervisors, I may have a common need for how I want to manage them. Maybe I want dynamic optimization to work a specific way across all these different hypervisors, and because of that I am going to include them in the same host group. Some people put them in separate host groups; other people combine them. It is your choice, and it works well both ways. The nice thing is, by treating these different hypervisors as just hypervisors, I can do things like deploy a service and, depending on how I have created that service template, deploy that same common service to, say, my Hyper-V systems, my Citrix XenServer systems, or even my VMware systems, or across systems. Everybody knows that most services or applications are not just one virtual machine; there are usually multiple tiers, and each tier may have multiple virtual machines inside of it: a load-balanced web server, a clustered file server, or something like that. So I may want different tiers on different hypervisors, and I can set up VMM to allow me to provision an environment that supports that as well.
* New in System Center 2012 R2 Virtual Machine Manager
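As a hedged sketch of that idea (none of these names are VMM's actual placement API), a mixed-hypervisor host group with per-tier placement might look like this:

```python
# One host group, "Production", containing all three hypervisor types.
hosts = [
    {"name": "hv01",  "type": "Hyper-V",   "group": "Production"},
    {"name": "esx01", "type": "vSphere",   "group": "Production"},
    {"name": "xen01", "type": "XenServer", "group": "Production"},
]

def place_tier(group, hypervisor_type):
    """Pick the hosts in a group that match a tier's required hypervisor type."""
    return [h["name"] for h in hosts
            if h["group"] == group and h["type"] == hypervisor_type]

# A two-tier service: the web tier targets Hyper-V, the data tier targets vSphere.
web_hosts = place_tier("Production", "Hyper-V")
data_hosts = place_tier("Production", "vSphere")
print(web_hosts, data_hosts)
```

The sketch mirrors the transcript's point: because the host group abstracts the hypervisor, the same service template can send different tiers to different hypervisors inside one group.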
13
Storage In the past few minutes we have talked about managing your compute resources. Now we are going to start focusing on the storage resources.
14
Utilize Storage More Effectively
Utilize Storage More Effectively

- End-to-end mapping: create associations between storage and VMs by reconciling data from Hyper-V and the storage arrays; identify storage consumed by VM, host, and cluster.
- Capacity management: add storage to a host or cluster through masking operations, initialization, partitioning, formatting, and CSV cluster resource creation; add storage capacity during new cluster creation.
- Rapid provisioning: create new VMs taking advantage of the SAN to copy the VHD; utilize SMI-S copy services and replication profiles; deploy to a host or cluster at scale.

This was a new thing introduced in System Center 2012 with VMM 2012, but it has really been improved over time with SP1 and now with R2, in that we are allowing you to use and manage your storage more effectively. You don't need to go back to the storage people every time you need a new LUN assigned to you. Wouldn't it be much nicer and easier to say, hey, can you give me a pool of disks? Give me 3 terabytes and let me divvy it out and allocate it to the different pieces of storage for my hypervisors as I see fit. So what we have done with VMM is allow you to leverage the fact that we can provide this end-to-end association between the underlying physical host resources and the storage these virtual machines are residing upon. Now I know from the host side that it is talking to, say, a fibre channel SAN environment through this HBA, to that fibre channel SAN, to the LUN assigned on it, to that VHD file.
Or I can go the other way: from the SAN environment I can look at the VHD and see that it is connected through the fibre channel to this particular host. The masking or unmasking of the LUN between that physical host and the SAN lets me get a good view and know exactly which servers are talking to which LUNs and need to be able to access which shares, whether it is via a LUN for fibre channel or iSCSI, or a file share, or any of that. It also gives me the ability to see how my capacity is running right now and provide some capacity management. When I created these CSVs on these LUNs, if I am doing things like provisioning, I want to know that I am starting to run out of disk space, and to understand how I can add new storage capacity or remove storage capacity as I need it, not waiting on somebody else or some other team to give that to me. Then it helps me with things like rapid provisioning: if I have the ability to use ODX (Offloaded Data Transfer) via the SAN, let's take advantage of it. With R2 we have really improved the capabilities on what types of things we can take advantage of with ODX inside VMM.
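A minimal sketch of that end-to-end association (all names here are invented for illustration, not VMM's data model) shows how the mapping can be queried in both directions — which LUN a VM's storage resolves to, and which VMs are impacted by a given LUN:

```python
# Each record chains VM -> VHD -> host -> HBA -> LUN, as the transcript describes.
mappings = [
    {"vm": "vm1", "vhd": "vm1.vhdx", "host": "hv01", "hba": "HBA0", "lun": "LUN-07"},
    {"vm": "vm2", "vhd": "vm2.vhdx", "host": "hv01", "hba": "HBA0", "lun": "LUN-07"},
    {"vm": "vm3", "vhd": "vm3.vhdx", "host": "hv02", "hba": "HBA1", "lun": "LUN-12"},
]

def vms_on_lun(lun):
    """Reverse mapping (the 'health view'): which VMs live on a given LUN."""
    return sorted(m["vm"] for m in mappings if m["lun"] == lun)

def path_for_vm(vm):
    """Forward mapping: resolve a VM's storage path host -> HBA -> LUN."""
    m = next(m for m in mappings if m["vm"] == vm)
    return (m["host"], m["hba"], m["lun"])

print(vms_on_lun("LUN-07"))  # both VMs that share the LUN
print(path_for_vm("vm3"))
```

This two-way lookup is the essence of the end-to-end mapping feature: a degraded LUN can be traced to the impacted VMs, and a VM can be traced down to the array it actually lives on.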
15
End-to-End Mapping Unified storage management API
VMM integrates with the Windows Server 2012 storage management API:
- SMI-S, SMP, and Spaces* storage devices
- Disk and volume management
- iSCSI/FC/SAS HBA initiator management

Windows Server discovery for Hyper-V hosts:
- HBA initiator ports (FC, iSCSI, SAS), volume, disk, NPIV, MPIO

Storage Monitoring:
- Indications/eventing – the SMI-S service subscribes to CIM lifecycle indications and alert indications to keep the cache in sync
- Monitoring of thin-provision threshold alerts from disk (sense codes), health view showing impacted VMs, capacity trending reports

So VMM provides that end-to-end mapping through a unified storage management API. We are leveraging the SMI-S protocol to manage the environment, but whether the device uses SMI-S or SMP, or is a Storage Spaces environment from Windows Server 2012 R2, we can support it and do the same common tasks across all of these different types of storage the same way from within VMM, and VMM will handle that for me. New support within R2 is the ability to support Storage Spaces devices, and we support it in a couple of different ways, which we will talk about over the next few slides. One of them is the ability to understand a file share and classify a file share as a piece of storage just like an iSCSI or fibre channel SAN storage environment. We also support discovery of the Hyper-V hosts and the different HBAs and devices that are connected to them, so that when we deploy a virtual machine, or we create and configure a physical host and we want to connect to a piece of storage, we can do that. We can unmask the LUN so that the storage is accessible by that Hyper-V host, and we can talk to the different pieces that allow us to support that.
Storage Monitoring: we support things like indications and eventing from the SMI-S protocol, meaning that if I did something like create a LUN outside of VMM and assign it to a pool that VMM has access to, VMM will notice that the LUN was created and be able to take advantage of it. So I can manage the storage pool through VMM and, for the most part, I would want to do it that way, but if a LUN does get created outside of VMM I can still leverage it from within Virtual Machine Manager. We also have the ability to discover SAN and storage devices, so we can talk to the SMI-S providers for your different storage partners, query those devices, and find out what storage is attached to them that we can have access to, that we can deploy LUNs and those types of things for. We do that for your SAN devices. We do it for your iSCSI devices. We also do it for your file servers, like your Windows Scale-out File Server, which is new with R2. Not only do we do the discovery of that, but we can also do things like deploy a scale-out file server and then manage the Storage Spaces behind it.

Storage Discovery levels (SAN / NAS / File Server):
- SAN discovery – FC, iSCSI, SAS
- NAS discovery – self-contained NAS, NAS head
- File Server discovery – Windows Scale-out File Server*
* New in System Center 2012 R2 Virtual Machine Manager
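To illustrate the classification-and-pool idea from the last two slides (device names, tiers, and sizes are made-up examples, and this is not a VMM API), discovered devices of different kinds can be rolled up by classification:

```python
# Discovered storage: SAN pools and a scale-out file server share, each
# assigned a classification, exactly as an administrator would in VMM.
discovered = [
    {"device": "FC-Array-1",  "kind": "SAN",        "pool": "Pool-A",  "tb": 3.0},
    {"device": "iSCSI-Tgt-1", "kind": "SAN",        "pool": "Pool-B",  "tb": 1.5},
    {"device": "SOFS-1",      "kind": "FileServer", "pool": "Share-1", "tb": 2.0},
]

classifications = {"Pool-A": "Gold", "Pool-B": "Silver", "Share-1": "Silver"}

def capacity_by_classification():
    """Total usable terabytes per classification tier, regardless of device kind."""
    totals = {}
    for d in discovered:
        tier = classifications[d["pool"]]
        totals[tier] = totals.get(tier, 0) + d["tb"]
    return totals

print(capacity_by_classification())
```

The design point is that once a file share is classified like a SAN pool, consumers ask for "Gold" or "Silver" capacity and never need to know which physical device backs it.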
16
Expanding SMI-S Support
Expanding SMI-S support enables the discovery of storage and its mapping to the virtual environment. VMM relies on storage providers that plug into SMAPI:
- SMI-S CIMXML: NetApp, EMC, HP, IBM, Dell (Compellent), Fujitsu, Hitachi, Huawei, StarWind, LSI (Engenio)
- SMI-S WMI: LSI (MegaRAID)
- SMP WMI: Dell (EqualLogic), NexSAN
Remote storage providers inform clients of changes in near real time, updating higher-level cache engines to improve discovery performance.
[Diagram: SMI-S object model – the host ComputerSystem mapped to the array's StoragePool, StorageVolume, and masking objects (SCSIProtocolEndpoint, StorageHardwareID, SCSIProtocolController), with lifecycle indications]
Enhanced iSCSI/SAS support: management of iSCSI SANs that create a new iSCSI target with each new storage logical unit; VMM automates the creation of storage, discovery of the portal, and initiator logon (e.g., the Microsoft iSCSI Target). Management of SAS-connected storage, including discovery and provisioning.
Over time, between 2012 SP1 and R2, we have really expanded our SMI-S support. Not only has the set of SMI-S providers we support grown across the different SAN and iSCSI vendors you may have, but we have also added new capabilities. One of the things we have done, besides managing these different storage providers, is add lifecycle indications. Lifecycle indications allow us to understand what is happening in the storage environment so that we don't accidentally delete a LUN we are not supposed to: we are never going to delete a LUN that we did not create, and so on.
We will also see changes that happen and be able to inform the clients that we are managing of those changes, so that if a new LUN was exposed we could unmask that LUN for a host, and the host could then use it as extra storage, say as another CSV within the Hyper-V environment. Lastly, we have enhanced our iSCSI and SAS-connected storage support. With 2012 SP1 we added support for the Microsoft iSCSI Target, which means that if I am using the Microsoft iSCSI Target as that back-end storage, I can create LUNs on it, attach those LUNs to the different Hyper-V hosts, and effectively manage that storage, all from an all-Microsoft solution at that point.
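To make the cache-sync idea concrete, here is a small conceptual sketch in Python. This is not how VMM is implemented; the class and indication handling are illustrative, though the indication kinds borrow their names from the real CIM lifecycle indication classes (CIM_InstCreation, CIM_InstDeletion, CIM_InstModification).

```python
# Conceptual sketch (not the VMM implementation): a management-side cache
# that stays in sync with an array by consuming SMI-S lifecycle indications
# instead of re-running a full discovery after every change.

class StorageCache:
    def __init__(self):
        self.luns = {}          # LUN name -> size in GB
        self.full_scans = 0     # how often we had to rediscover everything

    def full_discovery(self, array):
        """Expensive path: enumerate every LUN on the array."""
        self.luns = dict(array)
        self.full_scans += 1

    def on_indication(self, kind, name, size_gb=None):
        """Cheap path: apply a single lifecycle indication to the cache."""
        if kind == "InstCreation":        # a LUN appeared (even one created outside VMM)
            self.luns[name] = size_gb
        elif kind == "InstDeletion":      # a LUN was removed
            self.luns.pop(name, None)
        elif kind == "InstModification":  # e.g. a LUN was resized
            self.luns[name] = size_gb

cache = StorageCache()
cache.full_discovery({"LUN01": 500, "LUN02": 1000})

# A LUN is created on the array outside the management tool; the provider's
# indication keeps the cache current without another full scan.
cache.on_indication("InstCreation", "LUN03", 2000)
cache.on_indication("InstDeletion", "LUN01")

assert cache.luns == {"LUN02": 1000, "LUN03": 2000}
assert cache.full_scans == 1
```

The point of the sketch is the cost asymmetry: one full discovery, then arbitrarily many cheap incremental updates driven by the indications.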
17
Storage Capacity Management
Automation works for hosts, clusters, VMs, and service instances:
- Connect iSCSI and fibre channel arrays. iSCSI – add the iSCSI portal and log on to iSCSI target ports (works for a Hyper-V host/cluster). FC – add target ports to a zone (works for a Hyper-V host, cluster, VM, and service instance).
- Provision block and file storage.
- Fibre channel fabric*: zone management, zone member management, zoneset management.
- iSCSI/FC/SAS: add capacity, remove capacity.
So we are expanding that SMI-S support, but we are also able to better manage the capacity on the storage devices. We know which devices we are connected to, we can build in automation to understand which servers and hosts we are connected to, and we can manage the underlying physical fabric. What this means is that I can now talk not only to the storage devices, like the EMC SAN or the HP SAN, but also to the switches that are out there. Within those switches I can do zone management and add Hyper-V hosts as members of a specific zone so that they have access to the storage. We have iSCSI, fibre channel, and SAS capacity management, where we can add or remove capacity from what we are managing: if I have a piece of storage available to me, I can allocate pieces of it as different LUNs, assign them to specific hosts, and then pull them away if I want to. With Windows Server 2012 R2 we have added better support for file servers: file servers are now a storage-managed entity, and I can create new file shares and connect to file shares. We will handle the ACLs, the access control to those file shares, from the host. Say I have created a new file share on a file server and I want to add a Hyper-V host to that file server.
When I add that file share to the Hyper-V host, it will set up the permissions so that the host can store virtual machines on that file share. We have also improved support for the VHDX file format. We already supported VHDX with SP1, but with R2 we now let you use some of its new capabilities, like shared VHDX: we take advantage of shared VHDX from within VMM, and we allow you to use VHDX and shared VHDX as part of the back-end storage for a tier within a service template. We will talk a little more about that later in these videos.
- File server*: storage node provisioning, file server cluster management, storage pooling
- File share*: add capacity, remove capacity
- VHDX: model templates, deploy services, expose shared storage*
* New in System Center 2012 R2 Virtual Machine Manager
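The "VMM handles the ACLs for you" behavior can be sketched as a tiny model. This is purely illustrative: the class, share path, and computer-account names below are hypothetical, not real VMM objects.

```python
# Illustrative sketch only: when a file share is assigned to a Hyper-V host,
# the management layer grants the host's computer account access to the share
# so the host can store virtual machines there. All names are made up.

class FileShare:
    def __init__(self, path):
        self.path = path
        self.acl = set()   # computer accounts allowed to use the share

    def assign_to_host(self, host_account):
        # Granting access is part of assignment, so the admin never has to
        # touch the share's ACL by hand.
        self.acl.add(host_account)

    def unassign_from_host(self, host_account):
        self.acl.discard(host_account)

share = FileShare(r"\\fs01\VMData01")
share.assign_to_host("CONTOSO\\HYPERV01$")
assert "CONTOSO\\HYPERV01$" in share.acl

share.unassign_from_host("CONTOSO\\HYPERV01$")
assert share.acl == set()
```

The design point is that assignment and access control are one operation, which removes a whole class of "the host can see the share but can't write to it" misconfigurations.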
18
Storage Allocation Process
Storage Allocation Process. In Virtual Machine Manager:
- Discover storage through the SMI-S provider
- Create storage-classification pools and associate them with storage
- Allocate storage to specific host groups
Now that I have the storage set up and managed, I will walk you through how our storage allocation process works. Virtual Machine Manager will look through our SMI-S providers and connect to the different storage that is out there. This could be multiple different types of storage: an iSCSI environment, a file share, a fibre channel environment. Through the different SMI-S providers I will be able to access and see all of these different pieces of storage. Once I have done that, I can create classifications and pools, so I can create different tiers of storage. Maybe I have some expensive, fast storage that I want to call my Tier 1 storage, and that Tier 1 storage is in my primary datacenter. Maybe I also have a secondary datacenter where I want to deploy the same type of service, but the Tier 1 storage over there is not as fast. That is okay: I can create a classification called Tier 1 and put these different pieces of storage into that same classification. When I deploy a virtual machine in my production datacenter, it goes to the Tier 1 storage there, which is the high-IOPS storage; when it goes to the development datacenter and needs Tier 1 storage, it gets deployed to the right storage, even though that storage may not be as fast. So I can give two different devices different classifications depending on the types of disks.
Maybe I have fast disks in one storage pool, which I put in my Tier 1 classification, and slower disks in another pool, which I put in Tier 2, so I can break up the storage as I see fit as an administrator. Once I have created these classifications and associated them with the different storage that I am managing, I can allocate those tiers to the different host groups. By allocating them to the host groups, I can start allocating the storage directly to the Hyper-V hosts in those groups that have access to it. If I had an existing LUN out there, I could assign it to a Hyper-V host or a cluster of Hyper-V hosts; or, through the SMI-S protocol, I can create a new LUN or a new storage space, and once I have created that space I can assign it to a particular set of nodes or clusters and have the virtual machines deployed on that piece of storage. And I can do this from the different tiers and classifications of storage that I have out there.
- Assign an existing LUN/Space* to hosts and clusters
- Create a LUN/Space* from a pool and assign it to hosts and clusters
* New in System Center 2012 R2 Virtual Machine Manager
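The classification-based placement just described can be sketched in a few lines of Python. This is a conceptual model, not VMM code: the pool names, classifications, and host groups are invented for illustration.

```python
# Conceptual sketch of the allocation flow: pools are tagged with a
# classification, classifications are allocated to host groups, and a
# placement request for a tier resolves to whichever matching pool the
# target host group can reach. All names are illustrative.

pools = {
    "ProdSAN-Fast": {"classification": "Tier 1", "host_group": "Production"},
    "DevSAN-Slow":  {"classification": "Tier 1", "host_group": "Development"},
    "ProdSAN-Bulk": {"classification": "Tier 2", "host_group": "Production"},
}

def resolve_pool(classification, host_group):
    """Pick a pool matching the requested tier that the host group can use."""
    for name, p in pools.items():
        if p["classification"] == classification and p["host_group"] == host_group:
            return name
    return None

# The same "Tier 1" request lands on different physical storage depending
# on where the VM is being placed, exactly the two-datacenter story above.
assert resolve_pool("Tier 1", "Production") == "ProdSAN-Fast"
assert resolve_pool("Tier 1", "Development") == "DevSAN-Slow"
assert resolve_pool("Tier 3", "Production") is None
```

The value of the indirection is that templates and deployments name a tier, never a device, so the physical storage behind a tier can differ per datacenter or be swapped out later.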
19
Provision a Low-Cost Scale-Out File Server*
11/12/2018 2:09 AM Provision Low Cost Scale out File Server* Host group Bare metal deploy operating system Create scale out file server cluster Create storage pools Create file share Assign file share to Hyper-V host Authorized Hyper-V hosts Scale Out File Server Cluster Physical or virtualized deployments Windows Virtualized Storage So one of the new things that we added within Windows Server 2012 R2 is better support for storage spaces and creating these scale-out file servers. With VMM 2012 R2 we have increased the capabilities of it and not only do we deploy bare-metal Hyper-V systems, but we can now also deploy from bare-metal scale-out file server. What this means is that I can take a physical set of servers backed up with a JBOD and I can create a scale-out file server from there, from bare metal, all using Virtual Machine Manager. I can either do it from bare metal or if I have an existing set of servers out there that are already connected to a JBOD I can just bring those under management. So, I can create a storage pool from bare metal. I can create spaces on top of that pool. I can attach them to the file servers and create a cluster and take that scale-out file server cluster; all of this from bare metal all the way up to the fact that I have this scale-out file service created and these scale-out file servers are clustered together and they have created shares and those shares are now allocated to the Hyper-V hosts that are available. So I do a bare metal deploy to the OS, create the scale-out file server cluster, create the storage pools, create the file share within those storage pools and assign that file share to the Hyper-V host, all within VMM and that allows me now to bring it all together and I can have my Hyper-V host managed and bare metal deployed there. I can have the file servers bare metal deployed and created that the Hyper-V are going to be storing their virtual machines upon and do that all from within Virtual Machine Manager. 
[Diagram: storage spaces carved from storage pools over shared physical storage (SSD, SAS, or SATA)]
* New in System Center 2012 R2 Virtual Machine Manager
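The provisioning order above is strict: each step depends on the one before it. A small state-machine sketch makes that explicit; this is an illustrative model, and the step names are paraphrases of the slide, not VMM API calls.

```python
# A sketch of the provisioning order described above, as a simple state
# machine: each step requires the previous one, so a share can never be
# assigned before the cluster, pool, and share themselves exist.

STEPS = [
    "deploy_os",            # bare-metal deploy the operating system
    "create_sofs_cluster",  # form the scale-out file server cluster
    "create_storage_pool",  # pool the shared JBOD disks
    "create_file_share",    # carve a share out of the pool
    "assign_to_host",       # grant a Hyper-V host access to the share
]

class SofsProvisioner:
    def __init__(self):
        self.done = []

    def run(self, step):
        expected = STEPS[len(self.done)]
        if step != expected:
            raise RuntimeError(f"expected {expected!r} next, not {step!r}")
        self.done.append(step)

p = SofsProvisioner()
for s in STEPS:
    p.run(s)
assert p.done == STEPS

# Skipping ahead is rejected:
p2 = SofsProvisioner()
try:
    p2.run("create_file_share")
    raised = False
except RuntimeError:
    raised = True
assert raised
```

Modeling the pipeline this way is roughly what an orchestrator buys you over running the steps by hand: the ordering constraint is enforced rather than remembered.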
20
11/12/2018 Demo Now let’s flip over to a demo and let me show you a little bit about the storage that we have configured within this environment so that you can see what is going on within this system. That existing environment that we were looking at earlier and under the fabric tab we have this storage piece of it in the fabric workspace. Now, inside of storage we have the different classifications and pools and, like you can see here, I have 2 different classifications. I have an infrastructure classification which doesn’t have any storage attached to it and I have this tier 1 classification which has a file server attached to it. That file server has 24 terabytes of space assigned to that pool and inside of that pool basically 4 terabytes have been used and the rest of it is available. Now where does FS 01 come from? This file server here comes from the different file servers providers that we have that are available to us. We can see here that I have a scale-out file server that I have created and this scale-out file server has disk space assigned to it and that disk space is what I am using for the different pieces of storage. If I right click on that scale-out file server and we click on manage pools you can see that is where FS 01 comes from. It has 6 disks attached to it so this is the scale-out file server. I have created a space with 6 disks and everything seems to be running okay with that piece of it right now. Well, the fact is, is that file server is a clustered storage space which has 24 terabytes of capacity and I am only using 4 of that because I have mirrored those 2 terabyte LUNs that I have are in a mirrored environment so, you know, I am not going to lose the data in that respect. If I look at the properties of that storage space we can see that I have configuration of that where I have different pools inside of here and I have classifications. 
One thing I may want to do: maybe I have added more hard disks to the JBOD that this storage is accessing, and I want to take advantage of that new set of disks. I need to add those resources so that I can then create a file share on top of them. To do that, I just go through the configuration from within Virtual Machine Manager, which I will show you here. I want to add resources, and I have already added the storage devices; that is also where I could add a new file server, a SAN, or a fibre channel device. But what we want to do now is take that storage space we added previously and create a new pool for the additional disks we have added. So I go to the file server section, right-click on the scale-out file server, and choose Manage Pools. Inside here I can create a new pool from the disks that are not already allocated to any storage space; I will take the next 2 disks in the list. Before I hit Create I have to give it a name: we will call this one FS 11, instead of file server FS 01. We hit Create and give it a classification. This time I am going to put it in Infrastructure instead of Tier 1, because I can assign different classifications to the different storage environments. As you can see, I have FS 01, which has 6 disks, and this new one, FS 11, which has 2 disks. Hit OK, and now we have a pool available to us.
If we create a new logical unit there, we can choose that FS 11 pool; because it has 2 disks assigned to it, it has 8 terabytes of space available. We are going to call this file share VM Data 11 and give it 4 terabytes of space. I can create a thin-provisioned storage logical unit or a fixed-size one, and because I don't want to run out of disk space through thin provisioning, I am just going to create a fixed-size storage logical unit for this environment. Hit OK, and it is going to take up 4 terabytes of space on that storage pool. Now we can see that there is a new file share that is going to be available to me, and if I go under my classifications and pools, we can see that not only do I have FS 01 with its 3 logical units, but I also have FS 11 here, and it is creating the pool. If we bring up the Jobs window we can see whether it succeeded or failed: we were successful in creating the new pool, but we had a problem creating the actual share, with an error code here, so we can go through and fix that if we need to, then create the new LUN again. At that point we have a pool that we manage, and after we fix whatever challenge we had, we can create a file share that we can assign to the different physical servers out there. That is where you would manage your storage environment. Whether I have file servers or different arrays, I can create different providers: these two providers are for storage spaces, or Windows file servers, but I could add one for my fibre channel SANs and another for my iSCSI SANs, and those would all then be available as managed storage devices that I can use for allocating storage to my individual Hyper-V servers.
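The thin-versus-fixed choice from the demo is worth pinning down, since it decides whether a pool can be over-committed. Here is a deliberately simplified Python sketch; the pool sizes mirror the demo, but the class and its accounting are illustrative, not how a real array tracks capacity.

```python
# Sketch of the thin-vs-fixed choice from the demo: a fixed logical unit
# reserves its full size from the pool up front, while a thin one only
# consumes pool capacity as data is actually written, so it can over-commit.

class Pool:
    def __init__(self, capacity_tb):
        self.capacity = capacity_tb
        self.reserved = 0.0

    def create_lun(self, size_tb, thin=False):
        cost = 0.0 if thin else size_tb   # thin LUNs reserve nothing up front
        if self.reserved + cost > self.capacity:
            raise ValueError("pool exhausted")
        self.reserved += cost
        return {"size": size_tb, "thin": thin}

pool = Pool(capacity_tb=8)           # the 2-disk FS 11 pool from the demo
pool.create_lun(4, thin=False)       # fixed 4 TB LUN, as created above
assert pool.reserved == 4

# A thin LUN larger than the remaining free space still "succeeds", which is
# exactly the over-commit risk the demo avoids by choosing a fixed LUN.
pool.create_lun(6, thin=True)
assert pool.reserved == 4
```

Fixed provisioning trades flexibility for a guarantee: the space is yours on day one, and the pool can never be promised to more consumers than it can actually hold.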
21
Networking
22
Networking and Isolation in the Private Cloud
Networking and Isolation in the Private Cloud
[Diagram: standardized services and delegated capacity (Development, Production) built on VM networks as a cloud abstraction, over logical and standardized logical networks spanning the diverse infrastructure of Datacenter One and Datacenter Two]
Now we are going to flip back to the presentation and talk about networking. So far we have focused on the compute and storage environment; now we will focus on what we can do with Windows Server 2012 R2 and System Center 2012 R2 to support the new networking capabilities that Windows Server added in 2012 and 2012 R2. With System Center 2012 and VMM 2012 we had physical sets of networking resources that we pooled together and allocated to our different environments, creating things called logical networks. We then added more separation between how the underlying physical host sees the network and how the virtual machines see the network, in something called a VM network, which was designed to support multi-tenant environments. So I have a logically grouped set of resources; underneath are the logical networks that the physical servers see, and on top I have created VM networks, which I surface in my different cloud environments and use for multi-tenant isolation, the separation of the different virtualized environments. You saw this new segregation into logical networks and VM networks in 2012 SP1; with R2 we have really completed that separation, because we need that isolation to support things like Hyper-V Network Virtualization.
23
Hyper-V Network Virtualization
- Tenants with overlapping IP address ranges share the same physical network
- Packets are isolated using embedded subnet IDs; the host address plus subnet ID uniquely identifies an individual VM
- Policies are enforced at the host level using PowerShell or System Center Virtual Machine Manager
- Supports L2 learning, letting customers bring their own DHCP server, have locally assigned IP addresses for IPv6, and keep tenant control of IP addresses within their VMs
- Supports guest clustering
[Diagram: the Blue and Orange tenants each see their own SQL server and web server in the customer address space, running over a shared provider address space (PA)]
If we look at Hyper-V Network Virtualization, which came about in Windows Server 2012 but has really been enhanced in R2: what exactly does this mean, and how do VMM and System Center take advantage of it? First, what Hyper-V Network Virtualization is. It starts with a tenant environment: I have a group or a company, call it the Blue company. They have, say, a SQL server and a web server, and those machines have their own IP addressing scheme. That addressing is what the virtual machines see; the physical environment underneath may be on a completely different network. So even though I have a SQL server virtual machine running on Hyper-V host 1 and a web server running on Hyper-V host 2, they need to talk to each other on their own tenant network. In my datacenter I have a provider address space, and you can think of that as the logical network. I need that separation so I can route the tenant's packets without them conflicting with what is running on the physical wire.
So we create provider addresses for the hosts. These Hyper-V hosts can be on two totally different subnets, but the provider addresses let us route packets between them: we know which host the SQL server lives on and which host the web server lives on, and they can talk to each other, because we have network virtualization. We are using NVGRE, Network Virtualization using Generic Routing Encapsulation: we take the tenant's packets, encapsulate them, pass them around the physical wire, then de-encapsulate them and pass them up to the virtual machine. Everything works great. What happens if we add another tenant? We might have a challenge, because I am now adding a totally separate network environment. We have Blue here, and we add a new company, the Orange company. This Orange group has the exact same networking scheme; in other words, they have a SQL server and a web server with the same IP addresses as Blue's. I want to ensure that the Blue network only sees the Blue servers and the Orange network only sees the Orange servers, but they are all running on the same physical set of hosts: we are co-mingling those hosts. We do that through NVGRE, which disassociates and separates the two networks. The virtual machines within each tenant can talk to each other, but they never see the other tenant's traffic, so we have kept the networks isolated from each other while letting different tenants share the same physical network. In VMM terms, the Blue tenant network up top is one VM network, and the Orange network is another VM network.
The underlying physical provider addresses make up the logical network, and that is how VMM makes the association between the VM networks above and the logical network below. We will talk a little more about it, and I will show it to you, in a few minutes.
[Diagram: Hyper-V 1 and Hyper-V 2 each hosting a Blue and an Orange virtual machine (SQL server and web server)]
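The mapping NVGRE relies on can be modeled as a simple policy table keyed by virtual subnet and customer address (CA), resolving to a provider address (PA). The sketch below is conceptual, and every address and virtual subnet ID in it is made up for illustration; the real NVGRE header carries a 24-bit virtual subnet ID, but nothing here depends on that.

```python
# Conceptual model of the NVGRE lookup described above: a policy table maps
# (virtual subnet ID, customer address) to the provider address of the
# Hyper-V host running that VM. Overlapping customer IPs are fine because
# the virtual subnet ID keeps tenants apart. All values are illustrative.

policy = {
    # (virtual subnet id, customer address) -> provider address of the host
    (5001, "10.0.0.5"): "192.168.1.10",   # Blue SQL server
    (5001, "10.0.0.7"): "192.168.2.12",   # Blue web server
    (6001, "10.0.0.5"): "192.168.1.10",   # Orange SQL server, same CA as Blue's
    (6001, "10.0.0.7"): "192.168.3.14",   # Orange web server
}

def encapsulate(vsid, src_ca, dst_ca, payload):
    """Wrap a tenant packet for delivery across the provider network."""
    dst_pa = policy[(vsid, dst_ca)]   # no entry -> the destination is invisible
    return {"outer_dst": dst_pa, "vsid": vsid,
            "inner": (src_ca, dst_ca, payload)}

# Blue's web server talks to Blue's SQL server:
pkt = encapsulate(5001, "10.0.0.7", "10.0.0.5", b"query")
assert pkt["outer_dst"] == "192.168.1.10"

# A tenant cannot address anything outside its own virtual subnet:
try:
    encapsulate(5001, "10.0.0.7", "10.0.0.9", b"probe")
    leaked = False
except KeyError:
    leaked = True
assert leaked
```

Note that the two tenants' 10.0.0.5 entries resolve independently even though the CAs collide, which is exactly why overlapping tenant address spaces can share one physical network.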
24
Isolation
Isolation
- Physical separation: dedicated physical switches and adapters for each type of traffic
- Layer 2, VLAN: a VLAN tag is applied to packets and used to control forwarding
- Layer 2, Private VLAN (PVLAN): primary and secondary tags are used to isolate clients while still giving access to shared services
- Network virtualization: isolation through encapsulation; independence from the physical address space
This lets me provide isolation between the different environments, and I can provide that isolation in many different ways. I can use physical separation: the Blue and Orange networks sit on two totally separate networks, where the Blue network only uses these adapters or these particular Hyper-V hosts, and the Orange network only uses those adapters or hosts, so they are physically separated from each other. But that doesn't give me the best utilization of all my resources. I can use VLANs: this network over here uses VLAN 26 and that network over there uses VLAN 47, and since they are on totally different VLANs, the network traffic is isolated that way. But that becomes hard to manage when I have many different servers and many different companies: I have to know which VLANs belong to which networks and which hosts, and set up the routing between all of them, so it can be challenging for customers to support. I can create a PVLAN, which gives me a secondary tag that isolates things even further, but it still becomes a headache when I expand out to many, many hosts and many, many guests.
By using network virtualization, I can create this isolation far more simply and have it span all of the different servers out there; that isolation can cross subnets as well, and I get a clean split between the physical and the virtual networks.
25
VLAN Isolation: defines a layer 2 broadcast domain, achieved by tagging packets to tell the switch where they can go.
Benefits: very mature and reliable technology; universally adopted; well understood.
Limitations: limited VLAN capacity on each switch and port (4095 max); limited machine capacity on each VLAN; limits migration of machines; high management overhead.
If we look at VLANs, we can separate the physical environment from the virtualized environment by creating a virtual LAN, but I am capped at roughly 4,000 VLANs on any one switch or port, and I have to ensure that the right VLANs are created and set up on each port that the different Hyper-V hosts and networks connect to.
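The capacity ceiling comes from the 802.1Q tag itself: the VLAN ID field is 12 bits, and IDs 0 and 4095 are reserved, leaving 4094 usable IDs. A one-function sketch makes the bound explicit:

```python
# The 12-bit VLAN ID in an 802.1Q tag is the root of the capacity limit
# mentioned above: usable IDs are 1-4094 (0 and 4095 are reserved), so a
# switch can carry at most 4094 distinct VLANs no matter how many tenants
# you need to isolate.

def valid_vlan_id(vlan_id):
    return 1 <= vlan_id <= 4094

assert valid_vlan_id(26)         # the example VLAN from the transcript
assert valid_vlan_id(4094)
assert not valid_vlan_id(0)      # reserved (priority-tagged frames)
assert not valid_vlan_id(4095)   # reserved
assert not valid_vlan_id(5000)   # does not fit in 12 bits
```

Contrast this with NVGRE, whose virtual subnet ID is 24 bits, which is the main reason network virtualization scales past where VLANs run out.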
26
Private VLAN (PVLAN) Isolation
Private VLAN (PVLAN) Isolation: VLAN pairs are used to provide isolation with a small number of VLANs. VMM 2012 SP1 only supports the creation of isolated PVLAN VMs.
[Diagram: a primary VLAN with a promiscuous port, and secondary VLANs in isolated and community modes]
If I move to private VLANs, or PVLANs, as we talked about, I can create that isolation and separation, and by having a secondary tag I can even split things out so that virtual machines within the same primary VLAN are isolated from one another's traffic. VMM 2012 SP1 supports creating isolated PVLAN VMs, so you can use that as your isolation mechanism within Virtual Machine Manager, depending on whether you want them isolated or want a community where they can all talk to each other. But network virtualization gives you a better option for combining these things: getting rid of PVLANs and VLANs and using straight network virtualization.
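The promiscuous/community/isolated distinction boils down to a small forwarding-decision table, which the following sketch captures. It is a generic model of standard PVLAN semantics, not VMM behavior, and the port names are invented.

```python
# A sketch of PVLAN forwarding rules: every port shares one primary VLAN,
# and its secondary mode decides who it may talk to. Promiscuous ports
# (e.g. a shared gateway) reach everyone; community ports reach their own
# community plus promiscuous ports; isolated ports reach only promiscuous
# ports.

def can_forward(src, dst):
    """src/dst: (mode, community_id or None), both in the same primary VLAN."""
    smode, scomm = src
    dmode, dcomm = dst
    if smode == "promiscuous" or dmode == "promiscuous":
        return True
    if smode == "community" and dmode == "community":
        return scomm == dcomm
    return False   # anything involving an isolated port is blocked

gateway  = ("promiscuous", None)
web_a1   = ("community", "A")
web_a2   = ("community", "A")
web_b    = ("community", "B")
tenant_x = ("isolated", None)
tenant_y = ("isolated", None)

assert can_forward(web_a1, web_a2)          # same community
assert not can_forward(web_a1, web_b)       # different communities
assert not can_forward(tenant_x, tenant_y)  # isolated ports never see each other
assert can_forward(tenant_x, gateway)       # but every port reaches the gateway
```

This is why PVLANs stretch a small VLAN budget: one primary VLAN isolates many tenants while still sharing common services, at the cost of the per-port bookkeeping the transcript calls a headache at scale.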
27
No Isolation
No Isolation
Benefits: Provides direct access to the logical network; VMM picks the right VLAN based on placement; Direct access to infrastructure (think of System Center in a VM scenario); Public shared internet network
Upgrade to SP1: Pre-SP1 VMs have direct connectivity to the logical network by default.
If I wanted to use no isolation, all these virtual machines share the same network. Before SP1 of Virtual Machine Manager 2012, the virtual machines would all just be connected to the same logical network, and they would all have the same network addressing. With SP1, I create a new VM network and have all those virtual machines connected to it with no isolation, and what happens is that all these virtual machines are on the same network. With no isolation I am basically just passing everything through to the physical adaptor; the VM is on the network the same as everything else. I am not using any type of network virtualization, which makes things very simple, but it can also make things confusing: if I had a blue network and an orange network, they could not both be on the same network without being able to see the traffic between each other, because they have direct access to the infrastructure. This is something I would use if only one type of organization was using these virtual machines and I didn't have a need for that isolation.
28
Network Recommendations
Network Recommendations
Infrastructure networks: VLAN or no isolation
Load balancer back end and internet facing: PVLAN
Tenant networks: Network virtualization
If you look at it, if I am looking at my environment and I see that I am just managing infrastructure VMs, where I don't have these multiple different tenants, then maybe VLANs or no isolation is fine. That is all that I need, because I don't need to add the complexity of network virtualization when everything is sharing the same IP range and my virtual machines can basically share the same IP range as the physical hosts on the network. If I have a service-type environment with a load balancer, or maybe an internet-facing environment, and I want to add that extra layer of protection, that extra isolation between the different systems, so even if multiple systems share the same VLAN they cannot talk to each other without going through a router or something like that, then PVLAN would be the solution there. If I have multiple tenants, especially when I start getting into many of these tenants, and I have multiple needs for isolation across many different hosts, network virtualization will decrease the complexity: it will simplify my management, and I will be able to span these network virtualized environments across all of the different Hyper-V hosts that are out there. So for a hoster, for an enterprise, or for any company that wants to basically treat their customers as tenants, network virtualization would be the solution. There, you just need to worry about what network environment you have, whether it is IPv4 or IPv6, and what the tenants need to connect to out in the physical world.
29
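The guidance on this slide can be summarized as a simple lookup. This is just the slide's recommendation restated as data (an illustrative mapping, not a VMM feature), defaulting to network virtualization for anything tenant-like:

```python
# The slide's recommendations as a lookup table; illustrative only.
RECOMMENDED_ISOLATION = {
    "infrastructure": ["VLAN", "no isolation"],
    "load balancer back end": ["PVLAN"],
    "internet facing": ["PVLAN"],
    "tenant": ["network virtualization"],
}

def recommend(workload):
    # Unknown/multi-tenant cases default to network virtualization.
    return RECOMMENDED_ISOLATION.get(workload, ["network virtualization"])

print(recommend("infrastructure"))  # ['VLAN', 'no isolation']
print(recommend("tenant"))          # ['network virtualization']
```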
Address Spaces Size based on broadcasts and address utilization
11/12/2018 Address Spaces
Size based on broadcasts and address utilization. Can be DHCP and static, IPv4 and IPv6.
Logical network / address space defined by / example:
- Corp: Corp IT, /16
- Internet: ICANN, /24
- Management: Datacenter Admin, /24
- Net. Virt. Provider: /24
- Cluster/Storage/etc.: /24
- Tenant N: Tenant, /24
That brings up considerations like address spaces and what matters or is needed there. If I was building a network environment, then as an organization I get a set of IP addresses. I have my internet-facing IP addresses, and those are ones that I get from the outside world. Then I also have my internal network adaptors, and what a lot of organizations will do is create a few infrastructure spaces that they will use for that underlying logical network environment: maybe a network for management, a network for their network virtualization provider addresses to sit upon, and a network for their backend clustering, storage network, iSCSI, whatever. Those will be managed independently of each of the isolated tenant networks. As an example, I could use my 10. ranges for my management and infrastructure machines, and I could use the 192 range for my tenants. It really all depends on how many virtual machines you are going to be running within those tenants and whether you want to use a 192 range or a 10. network for that. But what we are really talking about here is that my management network, for things like talking to VMM and VMM talking to the Hyper-V hosts, doesn't have to be on the same network that I am using as my infrastructure for network virtualization and those provider addresses, which doesn't have to be the same network environment that I am using for my backend storage, my live migration, and those types of things.
I can create separate network environments, and it might be easier that way, because then I don't have to worry about my VMM talking on the same network, in the same traffic, that my infrastructure is talking. I can keep them there, or I can separate them out. With VMM they can be static or DHCP addresses; it doesn't matter. We will help you connect with them.
30
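The separate-spaces planning above can be sketched with Python's standard ipaddress module: lay out distinct management, provider, cluster/storage, and tenant ranges and verify that none of them overlap. The specific prefixes below are illustrative assumptions, not ranges from the slide.

```python
# Sketch of address-space planning; the prefixes are examples only.
import ipaddress
from itertools import combinations

spaces = {
    "management": ipaddress.ip_network("10.0.1.0/24"),
    "nv-provider": ipaddress.ip_network("10.0.2.0/24"),
    "cluster-storage": ipaddress.ip_network("10.0.3.0/24"),
    "tenant-1": ipaddress.ip_network("192.168.10.0/24"),
}

# Independent logical networks should not share address space.
for (n1, a), (n2, b) in combinations(spaces.items(), 2):
    assert not a.overlaps(b), f"{n1} overlaps {n2}"
print("no overlapping address spaces")
```

Running a check like this before defining logical networks in VMM catches the classic mistake of letting a tenant range collide with the provider or cluster range.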
VM Networks and Network Virtualization
VM Networks and Network Virtualization
VM networks define connectivity: Multi-tenancy; Isolation; Bring your own IP; Mobility
Logical switches define capability: Quality of Service (QoS); Security; Optimizations; Monitors
With 2012 SP1 we introduced this thing called a VM network. We also talk about this thing called the logical switch. Think about it this way: the VM network defines connectivity, so if I am inside that virtual machine and I am talking out to the physical world, I am going to see the network as exposed to me by that VM network. The logical switch, by contrast, assigns capability; it defines what capabilities I have when I am talking out to the physical world. In other words, maybe I want to set up network quality of service; I want to set up a bandwidth maximum or a guaranteed minimum or something like that inside of the network. I would assign that at the logical switch through a port profile, and then I would attach that to a particular virtual machine. So, logical switches define capability: what kind of quality of service I am going to set, what kind of optimizations, what kind of network extensions I am going to attach. Those are all done through the logical switch, whereas VM networks show me what the network looks like from the VM's point of view. The VM looking down sees a VM network that has a particular IP range, and that is the thing that it sees.
31
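The connectivity-versus-capability split above can be modeled as two small record types. This is a conceptual sketch with assumed names, not the actual VMM object model: the VM network carries what the VM sees, while the logical switch carries what the host enforces.

```python
# Conceptual model of the VM network / logical switch split; not the VMM API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VMNetwork:
    """Connectivity: the network as exposed to the VM."""
    name: str
    ip_subnet: str
    isolation: str  # "none" | "vlan" | "network-virtualization"

@dataclass
class LogicalSwitch:
    """Capability: QoS, extensions, and offloads the host applies."""
    name: str
    min_bandwidth_mbps: int = 0
    max_bandwidth_mbps: Optional[int] = None
    extensions: List[str] = field(default_factory=list)

vmnet = VMNetwork("Blue", "192.168.10.0/24", "network-virtualization")
switch = LogicalSwitch("Building44Prod", min_bandwidth_mbps=30,
                       extensions=["monitoring"])
print(f"{vmnet.name} sees {vmnet.ip_subnet}; {switch.name} guarantees "
      f"{switch.min_bandwidth_mbps} Mbps")
```

Keeping the two concerns in separate objects is exactly what lets the same VM network ride on different switches with different capabilities.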
Connectivity through VM Networks
Connectivity through VM Networks
Owner: Delegation to Application Administrator user role; Self-service creation by Tenant Admin user role
Multi-tenancy: Degrees of isolation - No isolation, Network virtualization, VLAN, External
Bring your own IP: Enabled by network virtualization; Tenant/customer IP address space separate from provider IP address space
So, we look at connectivity through VM networks. What we are really focusing on here is things like multi-tenancy. I can create multiple different VM networks for different groups. Each of these VM networks has an owner, someone who has created that VM network. I can allow either the tenant admin to create the VM network, or I can give creation to the administrator of the environment as well. What creating a VM network allows me to do, as a tenant of an organization, is say: okay, this is what my IP range is going to be; this is the VM network I want to create, so when I need to deploy virtual machines they have the right IP schema, and when they need to get out to the physical world, traffic goes through the gateway and attaches to the right thing. This allows an organization to basically bring their own IP schema, whether to a service provider or within the business. Let's say a business unit, like sales in a company, has a particular IP range, or maybe through acquisition I have brought in another company, and they have their own IP configuration over there. Well, I want to bring their infrastructure and combine it with mine, but I don't want to have to change all their different old IP addresses. I can bring that network into my network environment through the use of VM networks.
This allows me, leveraging things like network virtualization, to create my own little multi-tenant environment where the tenants are isolated from each other, and to have the virtual machines talk out with their IPs as if they were accessing the real world, whether they are using things like network virtualization or VLANs or whatever underneath.
VM Mobility: vNICs only connect to VM networks; VM networks are built on logical networks; VM networks span clouds; With NV, IP follows VM migration
32
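The bring-your-own-IP idea above rests on every lookup being keyed by tenant, not just by address. Here is a heavily simplified sketch of that idea (not the real Hyper-V Network Virtualization implementation): two tenants reuse the same customer address (CA), and the fabric resolves each (tenant, CA) pair to a distinct provider address (PA).

```python
# Simplified CA-to-PA lookup keyed by tenant; not the real HNV policy store.
lookup = {
    ("Contoso", "10.0.0.5"): "172.16.0.11",
    ("Fabrikam", "10.0.0.5"): "172.16.0.12",  # same CA, different tenant
}

def provider_address(tenant, customer_ip):
    """Resolve a tenant's customer address to its provider address."""
    return lookup[(tenant, customer_ip)]

print(provider_address("Contoso", "10.0.0.5"))   # 172.16.0.11
print(provider_address("Fabrikam", "10.0.0.5"))  # 172.16.0.12
```

Because the tenant is part of the key, an acquired company's existing 10.0.0.0/8 addressing can land in the datacenter unchanged without colliding with anyone else's.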
Logical Switch
Security: DHCP Guard, Router Guard; MAC spoofing; Guest teaming, IEEE priority tagging
Quality of Service: Minimum/maximum throughput; Relative weight
Optimizations: SR-IOV; IPsec task offloading; Virtual machine queue
Assign port profiles to logical switches (external) and VMs; assign VMs to port profiles. Provided by the Hyper-V extensible virtual switch, extensions, and the switch extension manager. Defines how a network adapter is able to use its connection.
The logical switch here defines things like the security. Within Hyper-V 2012 I can support things like DHCP Guard or Router Guard. I can turn those things on at the logical switch level, and any Hyper-V virtual switch on a Hyper-V system that is created from this logical switch definition, basically a logical switch template, will have those things turned on for it. I can set up quality of service; I can set up things like throughput, or the relative weight of this virtual machine environment as opposed to other virtual machines, through the use of port profiles. And if I need to turn on things like offloading access for the machines, or I want to set up teamed environments, those types of things, I do that through the logical switch template.
33
Single Root IO Virtualization (SR-IOV)
11/12/2018 Single Root IO Virtualization (SR-IOV)
Benefits: Virtual switch bypass for high performance workloads
Limitations: You need bandwidth controls; Some physical adapters don't support it; Limited number of VMs that can use it per host; You lose the capabilities of the vSwitch; Must be enabled when the virtual switch is created; Must be enabled as needed on the port profile; Limited support for intelligent placement
One of the things that we also support inside of VMM is single root IO virtualization, or SR-IOV. What this allows me to do is bypass the virtual switch for high performance workloads. What this means is that if I know I am going to be driving a lot of traffic, I can basically dedicate this network adaptor to this virtual switch, and we will just offload a bunch of work directly onto the network adaptor and get the best throughput we can for it. But it is going to limit the number of VMs that can use it, so I would only do this for a specific set of adaptors and only for a specific set of VMs, and I lose all the other vSwitch capabilities, because we are bypassing the virtual switch. But if I know I need it for a particular set of virtual machines, I can turn it on and enable it.
34
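The per-host VM limit above comes from the NIC exposing a fixed number of virtual functions (VFs). Here is a toy sketch of that allocation behavior (an assumed model, not how Hyper-V actually schedules VFs): once the VFs are taken, further VMs fall back to the software vSwitch path.

```python
# Toy model of SR-IOV VF capacity; not the Hyper-V allocator.

class SriovNic:
    def __init__(self, total_vfs):
        self.free_vfs = total_vfs  # hardware-fixed VF count

    def attach(self, vm):
        if self.free_vfs > 0:
            self.free_vfs -= 1
            return f"{vm}: SR-IOV VF (bypasses vSwitch, loses its features)"
        return f"{vm}: synthetic path through the vSwitch"

nic = SriovNic(total_vfs=2)
for vm in ["sql-1", "sql-2", "web-1"]:
    print(nic.attach(vm))
# sql-1 and sql-2 get VFs; web-1 falls back to the vSwitch
```

This is why the slide calls out limited intelligent-placement support: whether a VM actually gets the fast path depends on which host, and which NIC, it lands on.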
Logical Switch Definitions
Logical Switch Definitions
(Diagram: two logical switches. "Building 44 Prod" has native switch settings plus three extensions; its "DB", "Web", and "iSCSI" classifications map to virtual port profile sets "ContosoDB", "ContosoWeb", and "ContosoiSCSI", each combining a native virtual profile with per-extension virtual profiles, with "ContosoTeam" as the uplink port profile set. "Building 27 Dev" has native switch settings only; the same three classifications map to "NativeDB", "NativeWeb", and "NativeiSCSI", with "NativeTeam" as the uplink.)
Within logical switches you will create port profiles that determine what capabilities can be used by VMs connected to the switch, as well as how the connection to the physical adapters is used. You can create different profiles for different classes of applications, attaching multiple classifications to the logical switch, as well as create multiple logical switches for different locations, groups of servers, or network locations. Within profiles you can add different virtual profile settings, like: Security (DHCP Guard, Router Guard; MAC spoofing; guest teaming, IEEE priority tagging), Quality of Service (minimum/maximum throughput; relative weight), and Optimizations (SR-IOV; IPsec task offloading; virtual machine queue).
(Transcription starts here) We talked about this logical switch definition, and we really want to understand what that means. Well, think of it this way: maybe I have a couple of datacenters. Maybe it is a couple of floors in two different buildings or something like that.
In this example I have created a logical switch for my building 44. That logical switch has a definition as to the configuration of the Hyper-V virtual switch that I am going to create, and any extensions that I might want to add, like the Cisco Nexus 1000V or whatever other extensions are out there, like the one from 5nine. But it also has information about the virtual machines that are going to be running on there, so I am also going to create these port profiles for the different virtualized environments. Maybe I have a specific port profile for my database environments; I want to give them a network bandwidth quality of service with a guaranteed minimum of 30 megabits a second no matter what, so I can create those types of environments inside of there. I may have a web environment where I want to cap the limit, so it doesn't take over my network, at 100 megabits a second or something like that. I can do those types of things. I also create a port profile for the uplinks: do I want teamed adaptors, and if I want teamed adaptors, what does that look like? Do I want to use offloading, and those types of things? So I create these port profile sets, and I create a classification for each of the port profiles. Now, when I create a service template, I assign a classification to that service template, and that then gives me the ability to deploy the service template across multiple different datacenters where the different physical devices might have different capabilities. So, in building 44 I have these brand new servers with all these whiz-bang new features inside of them. My classifications here support these different profile sets, which have these different limits; maybe these are all 10 gig adaptors, and so I can really bump up the guaranteed minimums or crank up the maximum.
In my old development environment I have the same need for that service to be deployed, but maybe they are older servers and they do not have as much capability. Here, for that same classification, I might attach a different port profile to that logical switch, so that where in production I was able to get 10 gigabits of traffic, in this environment I only get 1 gigabit of traffic, so I need to limit that maximum to 100 meg instead of 2 gigs of traffic or something like that. Maybe here, instead of using a teamed environment, I don't team it. That is how I assign and create these logical switch definitions, and it gives me the flexibility to create multiple different common logical switches with just different types of connections to the classifications, so that when I create those virtual machines they are going to use the right profile depending on the physical servers they are running on.
35
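The classification indirection described above is essentially a two-level lookup. This sketch (assumed structure, not VMM's schema) shows how a service template can reference only a classification name, while each logical switch resolves that name to site-appropriate settings:

```python
# Sketch of classification-to-port-profile resolution; illustrative values.
switches = {
    "Building44Prod": {"DB": {"min_mbps": 30}, "Web": {"max_mbps": 2000}},
    "Building27Dev":  {"DB": {"min_mbps": 5},  "Web": {"max_mbps": 100}},
}

def resolve(switch_name, classification):
    """A service template names only the classification; the switch picks
    the concrete port profile for its hardware."""
    return switches[switch_name][classification]

# The same "Web" classification yields different limits per site:
print(resolve("Building44Prod", "Web"))  # {'max_mbps': 2000}
print(resolve("Building27Dev", "Web"))   # {'max_mbps': 100}
```

That indirection is what lets one service template deploy unchanged onto both the 10-gig production hosts and the 1-gig development hosts.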
How the Logical Switch Works
How the Logical Switch Works
(Diagram: two Hyper-V hosts, each with a management virtual switch built from the same logical switch definition, which supplies the switch settings, uplink port profiles, and virtual port profiles.)
How does this all work? Well, I have these different Hyper-V hosts, and those have one or more physical adaptors attached to a physical switch. I have created a logical switch definition that has specific port profiles, specific settings, uplink port profiles, and all this configuration assigned to it. On a given Hyper-V host, let's say Hyper-V host 1, I go into the configuration of that host, look at the adaptors that are there, and say: create a new virtual switch. I take that logical switch definition, choose the adaptor inside of that virtual switch that I want to manage, and choose which logical switch to apply to it. So I say that the port profile I am going to use for the uplink piece is a management port profile, and these virtual port profiles are available to it, so when I create new virtual machines they can get those virtual port profiles assigned to them, depending on which template I use and which virtual machines are attached to it. I can have either the same logical switch definition or a different one for the other hosts. This allows me, when the logical switch definition gets applied to a host, to define what I am going to do with the physical adaptors that I am attaching to this logical switch, and what characteristics the virtual machines being deployed against that logical switch have access to. What happens if things change, if someone makes a change on that switch? Those things can become out of compliance.
If they do become out of compliance, or maybe I have made a change to my logical switch settings themselves, which means the host is now out of compliance, I can then remediate that and apply those changes to the host. Here we went from one type of port profile for a VM to a different type of port profile for a VM, and once I have remediated that, everything is back up and running fine, so it allows me to manage these switches more appropriately.
(Diagram: a non-compliant management virtual switch with cluster, corp, and management port profiles, flagged for remediation.)
36
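The compliance loop described above boils down to comparing each host's applied settings against the logical switch definition and re-applying on drift. A minimal sketch, with assumed setting shapes (not VMM's actual compliance engine):

```python
# Sketch of the drift-detect-and-remediate loop; shapes are assumptions.

definition = {"uplink_profile": "Mgmt", "extensions": ["monitoring"]}

hosts = {
    "hyperv-01": {"uplink_profile": "Mgmt", "extensions": ["monitoring"]},
    "hyperv-02": {"uplink_profile": "Corp", "extensions": []},  # drifted
}

def non_compliant(hosts, definition):
    """Hosts whose virtual switch settings no longer match the definition."""
    return [h for h, settings in hosts.items() if settings != definition]

def remediate(hosts, definition):
    """Re-apply the logical switch definition to drifted hosts."""
    for h in non_compliant(hosts, definition):
        hosts[h] = dict(definition)

print(non_compliant(hosts, definition))  # ['hyperv-02']
remediate(hosts, definition)
print(non_compliant(hosts, definition))  # []
```

Drift can come from either direction, a local change on the host or an edit to the definition itself; either way the fix is the same re-apply step.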
Network Service* Support
Network Service* Support
Load balancers: Connect to the load balancer through a hardware provider; assign to clouds, host groups, and logical networks; configure the load balancing method and add a virtual IP on service deployment. Examples: F5 BIG-IP, Brocade ServerIron ADX, Citrix NetScaler, Microsoft network load balancer.
Switch extension managers: Supply network objects and policies to VMM; apply virtual switch extensions to the appropriate Hyper-V hosts; self-service users can choose port classifications based on extensions. Examples: Cisco Nexus 1000V, inMon sFlow, 5nine, NEC.
Network virtualization gateway: In-box NVGRE gateway; interfaces with and manages third-party gateway devices from Iron Networks, F5, and Arista.
With 2012 R2 we added, inside of Virtual Machine Manager, this new network service support. Now, instead of having all these different separate providers, we have this one thing called a network service, and when you create a network service you can decide: is it something like a load balancer, is it a switch extension, is it a network virtualization gateway, and configure it all out. Instead of having to go to all these different locations to configure your different types of network services, you can do that all from within the network service section of VMM. This makes life a little easier as you are managing all of these different extensions that are out there. Whether you have an extension, say the Cisco Nexus 1000V or the inMon sFlow monitoring adaptor, or a third-party network virtualization gateway, say from Iron Networks, F5, or Arista, or you are using the Microsoft Windows Server 2012 R2 in-box NVGRE gateway, you can manage them all from within the same tab in VMM.
* New in System Center 2012 R2 Virtual Machine Manager
37
The Inbox Gateway* Highly available Site-to-site VPN
Network address translation (NAT); Forwarding; Border gateway protocol (BGP); Multi-tenancy
Gateway provisioning process through VMM: Deploy host; Create cluster; Deploy gateway VMs from the provided service template; Add gateway to VMM; Ready to use
The other thing we provided with Windows Server 2012 R2 is this new inbox gateway. Windows Server 2012 R2 can now be the gateway for taking those network virtualized packets and moving them out to the physical world. What this means is that if I have two different tenants, and those two tenants are talking just VM to VM, everything is fine within their tenant environment. So, like we had the web server and the SQL server: in the blue network they could talk to each other, and in the orange network the web server and the SQL server could talk to each other, but they couldn't talk from the orange network to the blue network or from the blue network to the orange network. The VMs in the blue network can talk to each other, but they cannot talk out to the physical world; to do that, I have to have a gateway. That gateway can be one of the ones provided by our partners, a physical device, or it can be the inbox gateway that we support with Windows Server 2012 R2. VMM now provides the ability to create that network virtualization gateway. We do it in one of two ways. With the preview release, we came out with a template for a single-server Hyper-V host that we can manage, which can be the virtual gateway device. It has a bunch of VMs installed on it, and those VMs become the gateway routing devices from the virtualized network to the physical network. With the GA version of 2012 R2, it can be not only a single host but also a highly available host, which means that I can have a clustered environment and run this in production to support getting those packets from the virtualized environment out to the physical world.
It gets traffic out to the physical world in one of a few ways. It can use NAT, where it just puts traffic out on the network via network address translation. It could use a site-to-site VPN: if I had a network at my blue environment in the physical world and I created a site-to-site VPN with this gateway, then the blue VMs would be able to talk to it and route packets back and forth, and they would be on the network just the same as anything else. It also supports border gateway protocol, which is key for multi-tenancy. * New in System Center 2012 R2 Virtual Machine Manager
38
Network Virtualization Partner Ecosystem
Network Virtualization Partner Ecosystem
We have a lot of partners that are supporting VMM and things like the Hyper-V switch extensions, the inbox gateway, the gateway adaptors, load balancers, and such. We are working with a lot of these partners to allow you to extend your environment, and if you attach one of these switch extensions to a logical switch definition, when you apply that to, say, ten Hyper-V hosts, it will apply it to all those hosts, copy the right files down, and get it all configured, set up, and ready to go for you.
Network Virtualization Gateway
Bridge Between VM Networks & Physical Networks. The multi-tenant VPN gateway in Windows Server 2012 R2 is an integral multitenant edge gateway for seamless connectivity, with guest clustering for high availability, BGP for dynamic route updates, encapsulation and de-encapsulation of NVGRE packets, and multitenant-aware NAT for Internet access. If we look at what that really means, it means this: I can now bridge between the virtual machines running on my Hyper-V hosts, whether at a service provider or in my centralized datacenter, and the different networks that are out there. In this example, the service provider underneath has all these virtual machines; some are on the orange network over here at Fabrikam and some are on the blue network over at Contoso. Hyper-V Network Virtualization sets up the connection between those two environments and lets you create virtual machines that can then talk to the physical servers out there, or to the Internet, depending on how you have things set up and configured. The gateway does the encapsulation and de-encapsulation of those NVGRE packets, and it routes between this site and whatever site you want to access, whether via NAT, site-to-site VPN, or BGP.
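Conceptually, NVGRE encapsulation wraps the tenant's customer-address (CA) packet in a provider-address (PA) header carrying a 24-bit Virtual Subnet ID (VSID) as the GRE key. This toy Python sketch, with all names and addresses invented, illustrates the idea; it models packets as dictionaries rather than real wire formats:

```python
# Sketch of NVGRE-style encapsulation: the tenant packet (customer
# addresses, CA) is wrapped in a provider-address (PA) header carrying a
# 24-bit Virtual Subnet ID (VSID) as the GRE key. Illustrative only.

def encapsulate(vsid, ca_src, ca_dst, payload, pa_src, pa_dst):
    assert 0 <= vsid < 2**24, "VSID is a 24-bit field"
    inner = {"src": ca_src, "dst": ca_dst, "payload": payload}
    return {"src": pa_src, "dst": pa_dst, "key": vsid, "inner": inner}

def decapsulate(packet, expected_vsid):
    # A receiver only delivers the inner frame if the VSID matches the
    # tenant's virtual subnet; this is what keeps tenants isolated.
    if packet["key"] != expected_vsid:
        return None
    return packet["inner"]

pkt = encapsulate(5001, "10.0.0.5", "10.0.0.6", b"hello",
                  "192.168.1.10", "192.168.1.11")
delivered = decapsulate(pkt, 5001)   # same tenant: inner frame delivered
dropped = decapsulate(pkt, 6001)     # other tenant: packet is dropped
```

The gateway in the slide performs exactly this wrapping and unwrapping at the boundary between the virtualized and physical networks.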
Address Pools IP pools MAC pools Virtual IP pools
IP pools: VM network and logical network pools; assigned to VMs, hosts, and virtual IPs; specified for use at VM template creation; checked out at VM creation (a static IP is assigned in the VM); returned on VM deletion. MAC pools: assigned to VMs; specified for use at VM template creation; checked out at VM creation (assigned before the VM boots); returned on VM deletion. Virtual IP pools: assigned to service tiers that use a load balancer; reserved within IP pools; assigned to clouds; checked out at service deployment; returned on service deletion. To do all this, we have created something called an address pool. Address pools let me define a range of IP addresses and hand it to VMM; as I deploy virtual machines, VMM takes the next available IP address, gives it to the VM, and gets it up and running. This is good for organizations because I get virtual machines with static IPs without having to assign each IP to each VM by hand. We have IP pools, MAC pools, and virtual IP pools for things like the load balancers in service tiers. The address pool is an important concept inside the VM network range: it ensures each VM network can assign the right IP addresses to its tenants' VMs.
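The checkout/return life cycle described above can be sketched as follows. This is a simplified, hypothetical model of the behavior, not VMM code; the class and VM names are invented:

```python
# Minimal model of a static IP pool: addresses are checked out at VM
# creation and returned on VM deletion, so static IPs are handed out
# without assigning each one by hand.
import ipaddress

class IPPool:
    def __init__(self, start, end):
        first = int(ipaddress.ip_address(start))
        last = int(ipaddress.ip_address(end))
        self.free = [str(ipaddress.ip_address(i)) for i in range(first, last + 1)]
        self.assigned = {}              # VM name -> checked-out IP

    def checkout(self, vm):
        ip = self.free.pop(0)           # next available address in the range
        self.assigned[vm] = ip
        return ip

    def give_back(self, vm):
        # On VM deletion the address returns to the pool for reuse.
        self.free.append(self.assigned.pop(vm))

pool = IPPool("192.168.1.10", "192.168.1.20")
ip1 = pool.checkout("web01")
pool.give_back("web01")
```

MAC pools and virtual IP pools follow the same checkout/return pattern, just keyed to VM boot and service deployment respectively.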
Steps to Configuring Network Virtualization
Steps to Configuring Network Virtualization
Step 1. Create Logical Network: check Allow Network Virtualization; add a network site with the provider subnet; create IP pools for the provider addresses.
Step 2. Create Native Port Profile (Uplink): enable the "Windows Network Virtualization" filter; select the network site from the provider network.
Step 3. Create Logical Switch: add the uplink port profile; optionally add virtual port profiles (this can be done later).
Step 4. Apply Logical Switch to Host: in Host properties > Virtual Switches, add the new logical switch; select the adapter for the provider network; select the uplink port profile; add a new virtual adapter to the logical switch only if the adapter is also used for management, and check the Host Management checkbox.
Step 5. Create VM Networks: create IP pools for the consumer addresses.
The last thing: suppose I am setting up network virtualization just among virtual machines, before setting up the gateway to reach the outside world. The first thing I would do is create a provider network: a logical network for the provider addresses that network virtualization will use. When I create this logical network, I check that Allow Network Virtualization is turned on. I add a network site with the provider subnet so that I can allocate the provider addresses that will be used; basically, one provider address is needed per VM network per host that these virtual machines run on. Then I create IP pools within that network site for the provider addresses. Next, I create a native port profile for the uplink. In this port profile I make sure the Windows Network Virtualization filter is turned on, and I select the network site from the provider network I created. After that, I create the logical switch definition and add the uplink port profile; if I want to create virtual port profiles inside the logical switch definition, I can do that now or later. If I do it now, I can set things like quality of service and whether it uses DHCP Guard. Next, I apply the logical switch definition to a host: in the host properties under Virtual Switches, I add a new logical switch, select the physical adapter (or multiple adapters, if I am going to team them), and select the uplink port profile. If this adapter is also the one I use for management, I have to add a new virtual adapter for the management network and make sure Host Management is checked. Once all that is done on the physical host, I create the VM networks the tenants will use, and then I can put virtual machines on the IP pool ranges for those tenants. That is how you configure it without the gateway; after that, you add the gateway, and it handles the routing out to the physical world.
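The end result of these steps is a lookup policy on each host that maps a VM network's customer addresses (CAs) to provider addresses (PAs), roughly one PA per VM network per host as noted above. Here is a minimal illustrative model of that lookup; the VM network names and all addresses are invented:

```python
# Sketch of the HNV lookup policy the configuration steps produce:
# per VM network, each customer address (CA) maps to the provider
# address (PA) of the host where that VM currently runs.
policy = {
    # (vm_network, customer_address) -> provider_address of hosting node
    ("Finance", "10.0.0.5"): "172.16.0.11",
    ("Finance", "10.0.0.6"): "172.16.0.12",
    ("Sales",   "10.0.0.5"): "172.16.0.12",   # same CA, different tenant
}

def provider_for(vm_network, ca):
    # The sending host consults this table to choose the outer PA header.
    return policy.get((vm_network, ca))
```

Because the VM network is part of the key, two tenants can reuse the very same customer subnet and still be kept apart, which is exactly the same-subnet isolation the demo later shows with the Finance and Sales VM networks.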
Teamed Adapters Three basic patterns for configuration
Three basic patterns for configuration: non-converged (1GbE and 10GbE adapters, with separate adapters or teams for HBA/storage, live migration, cluster, management, and VM traffic); converged (10GbE each, with storage, live migration, cluster, and management traffic converged and VM traffic on its own team); and converged with RDMA (10GbE each, with storage, live migration, and cluster traffic over RDMA, and management and VM traffic on their own adapters). Within VMM we support these three basic patterns for teamed adapters. I can have a non-converged configuration, where different physical adapters are dedicated to different kinds of traffic: storage networks teamed in one set, live migration networks in another, cluster and management in their own, and VM traffic on yet another. Or I can converge, consolidating my management, live migration, cluster, and storage networks onto one team and isolating the VMs on a different team; technically, there is no reason I couldn't share all of these on a single team. Lastly, I can converge with something like RDMA to get much better performance. VMM supports all of these teaming approaches, which basically means it's up to you: what do you want to do, and how do you want to configure things? You can set up your teamed environments any of these ways.
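A teaming load-balancing algorithm such as address hashing can be sketched as a flow hash over the team members. This hypothetical Python model (the NIC names and addresses are invented) shows why all packets of one flow stay on one adapter while different flows spread across the team:

```python
# Illustrative sketch of address-hash load balancing on a NIC team: the
# flow's addresses and ports hash to one team member, keeping a flow's
# packets in order while distributing distinct flows across adapters.
import zlib

def pick_team_member(members, src_ip, dst_ip, src_port, dst_port):
    flow = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return members[zlib.crc32(flow) % len(members)]

team = ["NIC1", "NIC2"]
nic_a = pick_team_member(team, "10.0.0.5", "10.0.0.9", 5000, 443)
nic_b = pick_team_member(team, "10.0.0.5", "10.0.0.9", 5000, 443)
```

The deterministic hash is the design point: a flow never hops between adapters, so no per-packet state is needed to avoid reordering.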
Hybrid Cloud Connectivity – WS2012
One site-to-site gateway per tenant; limited routing capability; manual provisioning; Internet connectivity back to the remote site. (Diagram: separate S2S tunnels over the Internet connect Contoso Site 1 and Site 2 to the Contoso VM network, Northwind to the Northwind VM network, and Fabrikam Site 1 and Site 2 to the Fabrikam VM network at the hoster.)
Hybrid Cloud Connectivity – WS2012 R2
Multitenant site-to-site network virtualization gateway; clustering for high availability; BGP for dynamic routing; multitenant NAT for Internet access. The last thing we want to talk about is how we handle network connectivity to a different environment. Say I have a number of companies that want to connect to a hoster, or I am an IT shop that wants to treat my own environment as a hosted environment, with all these different networks out there. With Windows Server 2012 I had to create a site-to-site gateway per tenant, which meant that if I had 100 tenants I would have 100 different site-to-site gateways; that was starting to get too extreme for our customers. With Windows Server 2012 R2, the gateway can support multiple tenants routing through it, so I now have a much more consolidated and simply managed multi-tenant environment. Tenants can use NAT if I want, or I can create different site-to-site tunnels, all going through the same gateway device, so traffic is routed to the correct virtualized environment. When we added this gateway, we also made it highly available: it is clustered, so I can support all these customers knowing that if I lose one of the gateway Hyper-V hosts, the other host keeps running and continues to provide the service.
We also support BGP for dynamic routing across the different environments, so a network can reach one site through one route and a different site through another. You can also create multitenant NAT access so that a virtual machine can reach the physical network much the way your home machines do behind a cable modem router.
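The difference from the one-gateway-per-tenant model can be pictured as per-tenant routing compartments inside a single gateway. This toy model (tenant, prefix, and tunnel names invented for illustration) shows why 100 tenants no longer need 100 gateways:

```python
# Model of a multitenant S2S gateway: one gateway keeps a separate
# routing table ("compartment") per tenant, so overlapping tenant
# prefixes route to different tunnels without interfering.

class MultitenantGateway:
    def __init__(self):
        self.compartments = {}          # tenant -> {prefix: tunnel}

    def add_route(self, tenant, prefix, tunnel):
        self.compartments.setdefault(tenant, {})[prefix] = tunnel

    def route(self, tenant, dest_prefix):
        # Lookup happens only inside the tenant's own compartment.
        return self.compartments.get(tenant, {}).get(dest_prefix)

gw = MultitenantGateway()
gw.add_route("Contoso",  "10.1.0.0/16", "S2S-to-Contoso-Site2")
gw.add_route("Fabrikam", "10.1.0.0/16", "S2S-to-Fabrikam-Site1")
```

In the real gateway, BGP would populate each compartment's routes dynamically instead of the manual `add_route` calls used here.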
Demo. Before we end this video, I want to show you a bit of the network virtualization in one last demo, stepping through it quickly. We have talked about storage, networking, and compute management; now I will show you some of what the networking has set up for us and how these things work together. Going back to the environment we showed earlier, with the storage and compute we have already covered, we now focus on the networking. Inside the networking we talked about logical networks. A logical network like this infrastructure network here is one I created for my management environment. Then I have these tenant provider IPs; that was the second network I created, for the provider addresses used by network virtualization. They are on their own unique subnet, and I created a network site and a pool for them. The first step was to create a logical network with a network site for my provider addresses, and I created that right here. Next, we create some port profiles. There are two types: virtual port profiles, for the VMs, and uplink port profiles. I created this provider uplink, the port profile that defines my uplink. A couple of things here: if I want to set a particular load-balancing algorithm, I can. Do I want this teamed and, if so, how is it teamed? I can set that.
When creating the uplink port profile, I turn on network virtualization, enabling Hyper-V Network Virtualization here, and choose which network sites to use. I am using the tenant provider IPs, so I created the network site called tenant provider IPs and checked it to be assigned to this port profile. Once both of those are done, I can create guest virtual port profiles as well. Here I have one with offload data settings I can turn on, security settings such as DHCP Guard or Router Guard, and bandwidth: do I want a maximum bandwidth set, or a minimum bandwidth set, and how important is this profile compared to the others? If this one has weight 5 and everything else is 1, it gets access to the network more, and faster, than the others. I can set all of that inside the port profile for a virtual machine. Once that is done, I look at the logical switches; this is where I create the switch definitions for the provider and external addresses. This is the provider switch. Which extensions do I want to use? If I had other extensions, I could attach them here. With Windows Server 2012 R2 we changed how extensions work, so things like the Nexus 1000V can also be attached to a logical switch that is doing network virtualization. Which port profile do I use for the uplink? Teamed or not teamed? Here I say don't team it. Then, which virtual port profiles do I want to allocate to the switch? This could have high-bandwidth, low-bandwidth, and medium-bandwidth profiles. I can make all those settings and configurations. Once that is done, I can attach this to a host; I don't do that here, but instead on the individual servers themselves.
I have this tenant host called VS12 primary. If I look at the properties of that host under Virtual Switches, I have two different logical switches assigned to it. One is for management and has the management adapter; remember, if you are using one logical switch for both provider addresses and management, you have to have both of those things. Then I have a provider logical switch as well, using this particular adapter, and this is where I specify that. Once all that is done, I go under VMs and Services and create my VM networks. As you can see, I have both a Finance VM network and a Sales VM network, and even though they both have the same IP subnet ranges, they are totally isolated from one another. That quickly walks you through some of the configuration you can look forward to inside VMM. The last thing I want to show you is the service template we have for the network virtualization gateway. In this example I am showing the Windows Server 2012 template; it is not the highly available service template, but the one set up for a single server. As you can see, this template gets created with a virtual hard disk attached and three different network adapters. One is attached to my infrastructure network; that one is used for the provider addresses. One is attached to the external world; that is the one the packets are sent out of to the physical world. One is the one we use for routing: the NVGRE packets come in through that network, pass to the VM, get routed from NIC 3 to NIC 2, and then out to the physical world.
We need these three NICs; the routing NIC is not connected to anything and is just used to route everything from the isolated virtual machines out to the physical world. Basically, you have a physical host that is managed solely for network virtualization. If I go into the fabric and look under Infrastructure, I have this WS12 gateway host. That gateway host is designed only for network virtualization, and it is the only one running the gateway VM from the service template we just showed you; that VM is used to manage the gateway out to the physical world. The VMs running inside a tenant, like this sales app here, route packets to the other VMs running inside, and if they need to reach the physical world, traffic goes out through that WS12 gateway virtual machine.
Agenda Introduction Deploy Compute, Storage, and Networking
Agenda: Introduction; Deploy Compute, Storage, and Networking; Constructing the Private Cloud; Manage Across Clouds; Day to Day Operations; Architecture Reference. That is the end of this demo. Now I will flip back to the presentation. We have now talked about managing compute, storage, and network resources, all from within VMM. VMM lets you deploy bare-metal servers as Hyper-V hosts, deploy scale-out file servers for your clustered storage environment, and, lastly, manage and configure logical networking and Hyper-V Network Virtualization, all the way through gateway configuration: taking virtual machines inside the network virtualization environment and routing their packets out to the physical world. This first video has been all about compute, storage, and networking resources, and how VMM manages the underlying fabric configuration. In later videos we will talk about how to create a private cloud from this, and lastly the day-to-day operations and management of it.