
1 What’s new in Windows Server 2008 R2?
Ravikanth Chaganti

2 About me
Lead engineer, Windows Server OS group at Dell
Author of the HVS2008 UI

3 Agenda
Windows Server roadmap
Windows Server 2008 R2
Changes in R2
Hyper-V improvements
Power Management
Improved Remote Management
Deployment
References
Q & A

4 Windows Server roadmap

5 Windows Server 2008 R2
Currently in public beta phase
No 32-bit (x86) version of the OS
Minimum and recommended system requirements are the same as for Windows Server 2008
Will be available in the following editions: Standard, Enterprise, Datacenter, Web, and Hyper-V Server 2008 R2 (based on Standard)

6 Changes in R2
Hyper-V improvements
  Live Migration and Cluster Shared Volumes (CSV)
  Dynamic Storage
  Second Level Address Translation (SLAT)
  Jumbo frames
  Virtual Machine Queues (VMQ)
Power Management
  Core Parking
  Processor P-state support
Improved Remote Management
  Server Manager console
  PowerShell 2.0
  PowerShell ISE
Deployment
  Native VHD and VHD boot support
  WDS improvements to support native VHD boot

There are many changes in the R2 operating system; only a few significant ones are listed here. There are also many changes to Terminal Services and the web technologies in R2. We will discuss the highlighted topics in detail through this session and briefly touch upon the remaining topics now.

Dynamic Storage: You can now hot add/remove VHDs and pass-through disks to and from a running VM. This is supported only on the virtual SCSI controller.

Jumbo Frames: Jumbo frames bring the same basic performance enhancements to virtual networking that they already bring to physical networking. That includes up to six times larger payloads per packet, which not only improves overall throughput but also reduces CPU utilization for large file transfers.

SLAT: Uses processor features such as Intel EPT (Extended Page Tables) or AMD RVI (Rapid Virtualization Indexing), also called NPT (Nested Page Tables). The processor cycles otherwise spent translating guest memory addresses to physical addresses, and vice versa, are saved.

Processor P-states: P-states are the processor performance states defined in the ACPI specification. R2 can adjust P-states and thereby adjust power consumption. Group Policy (GPO) can be used to configure how power states are used (see the powercfg sketch below).
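As a rough illustration of the P-state policy configuration (not from the original slides; SCHEME_CURRENT, SUB_PROCESSOR, and PROCTHROTTLEMAX are assumed to be available powercfg aliases on R2), the built-in powercfg utility can inspect and adjust the processor power settings that govern P-state usage:

    # Query the processor power-management settings of the active plan
    powercfg /query SCHEME_CURRENT SUB_PROCESSOR

    # Cap the maximum processor performance state at 80% on AC power
    # (PROCTHROTTLEMAX is the alias for the "maximum processor state" setting)
    powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR PROCTHROTTLEMAX 80
    powercfg /setactive SCHEME_CURRENT

The same settings can be pushed to many servers at once through the power management Group Policy settings mentioned above.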

7 R2: Live migration
The most highly anticipated feature in Hyper-V v2
Eliminates the limitations of Quick Migration: move running VMs from one host to another with no disruption of service
Better agility
Increased productivity
Live migration can happen only between hosts with processors from the same manufacturer: Intel to Intel or AMD to AMD works; Intel to AMD is not possible
Support
  Supported only on the Enterprise and Datacenter editions
  Hyper-V Server 2008 R2 will also support live migration
  Supports up to 16 nodes per cluster

Quick Migration involved downtime because of the way it implemented moving VMs from one host to another: the VM is first paused, then moved to the destination, and resumed there. This caused up to six minutes of downtime in some scenarios. Each disk is owned by a single node at any given point in time.
Better agility: You can move VMs based on workload scenarios and available resources to achieve optimal consolidation. Using SCVMM 2008 R2, you can create policies for moving VMs around automatically and transparently to the end user (see the PowerShell sketch below).
Increased productivity: Scheduled maintenance of physical hosts can be done without impacting the workloads inside the VMs.

8 R2: Live migration: How it works (animation)
You can use Failover Cluster Manager to initiate a VM live migration. SCVMM 2008 R2 is another option, and of course you can script the migration process using WMI or PowerShell (a WMI sketch follows these steps). This slide shows the steps that take place when migrating a VM from one host to another.
Step 1: A TCP connection is created between the source and destination hosts. Configuration data is moved to the destination host, essentially creating a skeleton VM instance there, and memory is allocated for this VM.
Step 2: The memory contents of the migrating VM are copied over to the destination host. This memory is divided into pages 4 KB in size, and a dirty bitmap is created for them. The source host monitors these memory pages for changes, as the VM is still running; newly modified pages are marked dirty again in the bitmap. Hyper-V iterates over the modified pages and moves them to the destination host. This has to stop at some point, otherwise it would loop forever, so the number of such iterations is limited to 10.
Step 3: The VM is put into a saved state; all remaining modified pages are transferred, and the register and device state is copied to the destination host. This is where network bandwidth plays a critical role, which is why 1 Gb Ethernet is recommended. If the VM was under heavy load when live migration was initiated, this step may take longer because there will be more modified pages. The VM on the destination host is now up to date. The goal of this step is to complete in less than 20 ms; otherwise TCP connections will time out and clients connected to the VM may see a disruption of service.
Step 4: Storage ownership is transferred to the destination host.
Step 5: The VM is resumed on the destination host.
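A hedged sketch of the WMI route, using the Msvm_VirtualSystemMigrationService class in the root\virtualization namespace. The exact MigrateVirtualSystemToHost parameter list should be verified against the R2 Hyper-V WMI documentation; "VM1" and "NODE2" are placeholders, and the setting-data arguments are left null for brevity:

    # Locate the VM and the migration service in the Hyper-V WMI namespace
    $vm  = Get-WmiObject -Namespace root\virtualization -Class Msvm_ComputerSystem `
           -Filter "ElementName = 'VM1'"
    $svc = Get-WmiObject -Namespace root\virtualization -Class Msvm_VirtualSystemMigrationService

    # Ask the migration service to move the VM to the destination host
    $result = $svc.MigrateVirtualSystemToHost($vm.__PATH, "NODE2", $null, $null, $null)
    $result.ReturnValue   # 0 = completed, 4096 = job started asynchronously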

9 R2: Cluster Shared Volumes
Live migration requires shared storage, and Cluster Shared Volumes (CSV) provide it
You can enable this feature from the Failover Cluster Manager console (or from PowerShell; see the sketch below)
CSV provides a single consistent file namespace: files have the same name and path when viewed from any node in the cluster
CSV volumes are exposed as directories and subdirectories under the ClusterStorage root directory:
  C:\ClusterStorage\Volume1\<root>
  C:\ClusterStorage\Volume2\<root>
  C:\ClusterStorage\Volume3\<root>
Failover Clustering has used a "shared nothing" storage model for the last decade: each disk is owned by a single node at any one time, and only that node can perform I/O to it.
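Besides the Failover Cluster Manager console, the R2 FailoverClusters module can add a disk to CSV from PowerShell. A minimal sketch; the disk resource name "Cluster Disk 2" is a placeholder:

    Import-Module FailoverClusters

    # List the physical disk resources available in the cluster
    Get-ClusterResource | Where-Object { $_.ResourceType -like "Physical Disk" }

    # Promote one of them to a Cluster Shared Volume
    Add-ClusterSharedVolume -Name "Cluster Disk 2"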

10 R2: CSV: Concurrent access to a single file system
(Diagram: multiple hosts storing their VMs' VHDs on a single CSV volume on the SAN)
All physical hosts have access to the same shared physical LUN. Each VM stores its VHD on the shared storage, eliminating the need for LUN ownership changes.
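One way to see the single consistent namespace in action is PowerShell 2.0 remoting, which this deck covers under Improved Remote Management. A sketch assuming remoting is enabled on the cluster nodes; NODE1 and NODE2 are placeholder names:

    # Every node sees the identical path and contents for the CSV volume
    Invoke-Command -ComputerName NODE1, NODE2 -ScriptBlock {
        "$env:COMPUTERNAME sees:"
        Get-ChildItem C:\ClusterStorage\Volume1
    }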

11 R2: VMQ
Virtual network data path without VMQ (diagram: VM1 and VM2 connect through virtual NICs over the VM bus to the virtual machine switch in the parent partition, which performs routing, VLAN filtering, and data copies on top of the NIC miniport driver)
Receive path:
  The VM switch has to parse incoming packets and route them to the proper VM based on MAC address and VLAN ID
  Data is copied from the parent partition's address space to the child partition; the guest is then interrupted to pick up the data and copies it into its own address space
  Too many copy operations
Send path:
  Data is copied from the child partition's address space to the parent partition's address space
  MAC address lookup and VLAN ID filtering
  Parent/child context switch overhead
  Task offload is simulated in software for VM-to-VM traffic
  Extra copy for VM-to-VM traffic

12 R2: VMQ
Virtual network data path with VMQ (diagram: the NIC now contains the switching/routing unit with per-VM queues Q1 and Q2 plus a default queue, taking routing, VLAN filtering, and data copy off the virtual machine switch)
Requires a NIC that supports VMDq, such as an Intel VT quad-port 1 Gb adapter or a dual-port 10 Gb Intel VT adapter
Enhances overall network throughput and reduces the CPU cycles spent handling VM network traffic

13 R2: Core Parking
Without core parking, all processor cores remain active even when the system is running at only 10-20% of overall capacity. The cores may use DBS (Demand Based Switching) or PowerNow! to reduce their frequency, but that does not result in large power savings.
Core parking consolidates processing onto the fewest possible processor cores and suspends the inactive cores, resulting in better power savings (see the powercfg sketch below).
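Core parking behavior is driven by the power plan. An illustrative sketch, not from the slides: the GUID shown is assumed to be the "core parking min cores" processor setting, and the percentage value is an example.

    # Processor settings subgroup: SUB_PROCESSOR
    # Core parking min cores (CPMINCORES): 0cc5b647-c1df-4637-891a-dec35c318583

    # Show the current minimum percentage of cores that must stay unparked
    powercfg /query SCHEME_CURRENT SUB_PROCESSOR 0cc5b647-c1df-4637-891a-dec35c318583

    # Require only 25% of cores to stay unparked on AC power, then apply
    powercfg /setacvalueindex SCHEME_CURRENT SUB_PROCESSOR 0cc5b647-c1df-4637-891a-dec35c318583 25
    powercfg /setactive SCHEME_CURRENT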

14 R2: Native VHD support
Natively supports creating and managing VHDs
Native support for all VHD types: fixed, dynamic, differencing
Diskpart.exe or diskmgmt.msc can be used to create VHDs (see the sketch below)
There are no CIMv2 WMI interfaces; you must have the Hyper-V role enabled to create VHDs through WMI
Differencing disks can be created only with diskpart.exe
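A minimal diskpart sketch, driven from an elevated PowerShell prompt; the paths, size, and drive letter are examples, and C:\vhds is assumed to exist. It creates a 20 GB dynamic (expandable) VHD, attaches it, and formats a volume on it:

    # Write a diskpart script, then run it with diskpart /s
    $cmds = 'create vdisk file="C:\vhds\data.vhd" maximum=20480 type=expandable',
            'select vdisk file="C:\vhds\data.vhd"',
            'attach vdisk',
            'create partition primary',
            'format fs=ntfs label="DataVHD" quick',
            'assign letter=V'
    $cmds | Set-Content C:\vhds\create-vhd.txt
    diskpart /s C:\vhds\create-vhd.txt

    # A differencing disk is diskpart-only, e.g.:
    #   create vdisk file="C:\vhds\diff.vhd" parent="C:\vhds\base.vhd"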

15 R2: Native VHD boot
Supports booting from a VHD without any hypervisor software
Can be used to create a multi-boot configuration without actually installing the OS onto a physical partition (see the bcdedit sketch below)
WDS has been improved to enable native VHD boot deployment
Benefits:
  Single image
  Same image format for WDS, unattended, and VM deployments
  In line with the single file format efforts within Microsoft
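A hedged sketch of adding a native VHD boot entry with bcdedit, run from an elevated prompt; the VHD path and entry description are placeholders:

    # Copy the current boot entry and capture the new entry's GUID
    $out  = bcdedit /copy '{current}' /d "Windows Server 2008 R2 (VHD boot)"
    $guid = [regex]::Match($out, '{[a-f0-9-]+}').Value

    # Point the new entry at the VHD; the [C:] locator resolves at boot time
    bcdedit /set $guid device "vhd=[C:]\vhds\r2.vhd"
    bcdedit /set $guid osdevice "vhd=[C:]\vhds\r2.vhd"
    bcdedit /set $guid detecthal on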

16 References
Beta build download
Beta reviewers guide

