1
Hyper-V Performance, Scale & Architecture Changes
Benjamin Armstrong, Senior Program Manager Lead, Microsoft Corporation
VIR413
2
Feature | Windows Server 2008 | Windows Server 2008 R2 | Windows Server 2012
HW Logical Processor Support | 16 LPs | 64 LPs | 320 LPs
Physical Memory Support | 1 TB | 1 TB | 4 TB
Cluster Scale | 16 nodes, up to 1,000 VMs | 16 nodes, up to 1,000 VMs | 64 nodes, up to 8,000 VMs
Virtual Machine Processor Support | Up to 4 VPs | Up to 4 VPs | Up to 64 VPs
VM Memory | Up to 64 GB | Up to 64 GB | Up to 1 TB
Live Migration | Yes, one at a time | Yes, one at a time | Yes, with no limits (as many as hardware will allow)
Live Storage Migration | No (Quick Storage Migration via SCVMM) | No (Quick Storage Migration via SCVMM) | Yes, with no limits (as many as hardware will allow)
Servers in a Cluster | 16 | 16 | 64
VP:LP Ratio | 8:1 | 8:1 for Server, 12:1 for Client (VDI) | No limits (as many as hardware will allow)
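As a rough illustration of the Windows Server 2012 maximums, a VM can be given up to 64 virtual processors and up to 1 TB of memory from PowerShell. This is a minimal sketch; the VM name "BigVM" is hypothetical, and the VM must be powered off before the processor count can be changed.

# Configure a VM for the Windows Server 2012 per-VM maximums (VM must be off).
Set-VMProcessor -VMName "BigVM" -Count 64          # up to 64 virtual processors
Set-VMMemory    -VMName "BigVM" -StartupBytes 1TB  # up to 1 TB of memory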
3
Agenda: Supporting 320 LPs
6
Hypervisor counters correctly show the total number of system LPs and the number of parent VPs.
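These counters can be read with the standard performance counter cmdlet; a minimal sketch, assuming the Hyper-V role is installed on the host so the "Hyper-V Hypervisor" counter set is present:

# Read the hypervisor's own view of logical and virtual processors.
Get-Counter -Counter "\Hyper-V Hypervisor\Logical Processors"
Get-Counter -Counter "\Hyper-V Hypervisor\Virtual Processors"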
7
MSINFO32, WMI, PowerShell
Name : Genuine Intel(R) CPU @ 1.80GHz
Description : Intel64 Family 6 Model 58 Stepping 2
NumberOfCores : 2
NumberOfLogicalProcessors : 2
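The fields above are the kind of data returned by the Win32_Processor WMI class; a minimal PowerShell sketch that queries the same information:

# Query processor topology through WMI/CIM (one object per physical socket).
Get-CimInstance -ClassName Win32_Processor |
    Select-Object Name, Description, NumberOfCores, NumberOfLogicalProcessors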
8
Agenda: 64 Virtual Processors in a VM
10
Core Hypervisor VP
27
Agenda: 1 TB of memory in a VM
28
NUMA (Non-Uniform Memory Access)
Helps hosts scale up to more cores and more memory
Partitions cores and memory into "nodes"
Allocation cost and latency depend on where memory sits relative to a processor
High-performance applications detect NUMA and minimize cross-node memory access
[Diagram: host NUMA with memory and processors divided between NUMA node 1 and NUMA node 2]
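On a Hyper-V host, the per-node split of processors and memory can be inspected from PowerShell; a minimal sketch using the Hyper-V module:

# List the host's NUMA nodes with their processor and memory resources.
Get-VMHostNumaNode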
29
This is optimal…
The system is balanced
Memory allocations and thread allocations stay within the same NUMA node
Memory is populated in each NUMA node
[Diagram: host NUMA with memory and processors balanced across NUMA nodes 1–4]
30
This isn’t optimal…
The system is imbalanced
Memory allocations and thread allocations span different NUMA nodes, requiring multiple node hops
NUMA node 2 has an odd number of DIMMs
NUMA node 3 doesn’t have enough memory
NUMA node 4 has no local memory (worst case)
[Diagram: imbalanced host NUMA with memory and processors across NUMA nodes 1–4]
31
Guest NUMA
Presents a NUMA topology within the VM
Guest operating systems & apps can make intelligent NUMA decisions about thread and memory allocation
Guest NUMA nodes are aligned with host resources
Policy driven per host: best effort, or force alignment
[Diagram: vNUMA nodes A and B in each VM mapped onto host NUMA nodes 1–4]
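The host-wide policy and the per-VM virtual NUMA shape are exposed through the Hyper-V cmdlets; a minimal sketch, where the VM name "SQLVM" and the limit values are hypothetical and the VM must be off:

# Force alignment: do not let VMs span host NUMA nodes (host-wide policy).
Set-VMHost -NumaSpanningEnabled $false

# Shape the virtual NUMA topology presented to one VM.
Set-VMProcessor -VMName "SQLVM" -MaximumCountPerNumaNode 16 -MaximumCountPerNumaSocket 1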
35
Live Migration
36
Agenda: Faster I/O
37
[Diagram: network I/O path without SR-IOV vs. with SR-IOV. Without SR-IOV, traffic flows from the VM's virtual NIC through the Hyper-V switch in the root partition (routing, VLAN filtering, data copy) to the physical NIC. With SR-IOV, the VM uses a Virtual Function on the SR-IOV physical NIC directly, bypassing the software switch.]
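The SR-IOV path is only available if the virtual switch was created in IOV mode; a minimal sketch, where the switch name and physical adapter name are hypothetical and the IOV setting can only be chosen at switch creation time:

# Create an external virtual switch in IOV mode (cannot be changed later).
New-VMSwitch -Name "IOV-Switch" -NetAdapterName "Ethernet 2" -EnableIov $true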
38
SR-IOV Enabling & Live Migration
Turn on IOV (a VM NIC property): a Virtual Function is "assigned", a team is automatically created between the software NIC and the VF inside the VM's network stack, and traffic flows through the VF while the software path is not used.
Live Migration: the team is broken and the VF is removed from the VM, then the VM migrates as normal. The VM keeps connectivity through the software path even if the destination switch is not in IOV mode, the IOV physical NIC is not present, or the NIC vendor or firmware differs.
Post Migration: the Virtual Function is reassigned, assuming resources are available.
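Turning IOV on for a VM NIC is done by giving it a non-zero IOV weight, and the adapter's IOV configuration can be checked before and after a migration; a minimal sketch, with the VM name "WebVM" hypothetical:

# Request a Virtual Function for the VM's network adapter (weight 0 disables IOV).
Set-VMNetworkAdapter -VMName "WebVM" -IovWeight 100

# Inspect the adapter's IOV configuration, e.g. after a live migration.
Get-VMNetworkAdapter -VMName "WebVM" | Format-List Name, SwitchName, IovWeight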
40
SR-IOV
41
VHDX: the new default format for virtual hard disks
Larger virtual disks
Large sector support
Enhanced performance through larger block sizes
Enhanced resiliency
Embedded custom, user-defined metadata
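The larger block sizes and 4 KB sector support are exposed as parameters when creating a VHDX file; a minimal sketch, where the path and sizes are hypothetical:

# Create a dynamic VHDX with 4 KB logical/physical sectors and a 32 MB block size.
New-VHD -Path "D:\VHDs\data.vhdx" -SizeBytes 4TB -Dynamic `
        -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096 `
        -BlockSizeBytes 32MB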
42
[Chart: IOPS at a queue depth of 16]
43
[Chart: throughput in MB/s at a queue depth of 16, showing a 25% difference]
44
Virtual Storage Stack
Previously, VM device I/O throughput was limited by one channel per VM, a fixed VP for I/O interrupt handling, and a 256 queue depth per virtual SCSI controller shared by all attached devices.
In Windows Server 2012: one channel per 16 VPs per virtual SCSI controller, a 256 queue depth per device per controller, and I/O interrupt handling distributed among the VPs.
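Because queue depth is now per device per virtual SCSI controller, spreading data disks across additional controllers is one way to take advantage of the change; a minimal sketch, where the VM name and VHDX path are hypothetical and controller number 1 assumes the VM already has one SCSI controller (number 0):

# Add a second virtual SCSI controller and attach a data disk to it.
Add-VMScsiController -VMName "SQLVM"
Add-VMHardDiskDrive  -VMName "SQLVM" -ControllerType SCSI -ControllerNumber 1 `
                     -Path "D:\VHDs\data1.vhdx"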
45
VHD Stack
46
Token
47
ODX
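ODX (Offloaded Data Transfer) lets copies between capable storage arrays be performed by the array itself, using a token rather than buffering the data through the host. Whether the host will use offload is governed by the FilterSupportedFeaturesMode registry value (0 means offload is allowed); a minimal sketch for checking it:

# Check the ODX (Offloaded Data Transfer) setting: 0 = enabled, 1 = disabled.
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
                 -Name "FilterSupportedFeaturesMode"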
48
Agenda: Results
49
Metric | 32 LP/VP Native | 32 LP/VP Hyper-V | 64 LP/VP Native | 64 LP/VP Hyper-V
Throughput | 960 | 840 | 1589 | 1496
CPU Utilization | 97.4% | 98.6% | 79% | 86.8%
Throughput Loss | – | 12.5% | – | 6%
Path Length Overhead | – | 15.7% | – | 16.9%
50
Related sessions:
VIR312: What's New in Windows Server 2012 Hyper-V, Part 1
VIR315: What's New in Windows Server 2012 Hyper-V, Part 2
VIR321: Enabling Disaster Recovery using Hyper-V Replica
VIR314: WS2012 Hyper-V Live Migration and Live Storage Migration
Find me later at @VirtualPCGuy