VIRTUALIZING MICROSOFT EXCHANGE SERVER WITH HYPER-V Session Code: VIR-EXL308 Glen Gooda, Senior Consultant, Microsoft
What We Will Cover ► Why Microsoft Virtualization for Microsoft Server Applications? ► Base Hypervisor Performance ► ESG Performance Results ► Exchange Server Virtualization Best Practices ► Links to Reference Material
Trends – Changing Market Landscape ► Virtualization is exploding, resulting in VM proliferation and impacting OS share – The number of physical server shipments used for virtualization will grow to 1.7M+ in 2012, a CAGR of 15% – 19% of physical server shipments will be used for virtualization, up from 11.7% in 2007 (IDC Server Virtualization Forecast)
Why Microsoft Virtualization for Microsoft Server Applications
Microsoft Virtualization for Server Applications (platform stack) ► Management Platform ► Virtualization Platform ► Microsoft Server Applications – business-critical, enterprise, and line-of-business (LOB)/custom applications
Microsoft Virtualization: The Best Choice for Microsoft Server Applications ► Built for Windows – Built-in virtualization with one-stop support – Large partner ecosystem – Increased deployment options ► Complete Management Solution – Deep application knowledge – Physical & virtual management – Cross-platform & hypervisor support ► Low Cost Complete Solution – A comparable solution can cost almost 4 times more† – Lower on-going costs – Virtualization-friendly licensing, only available with Microsoft Virtualization †Based on a comparison of Microsoft® System Center Server Management Suite Datacenter with VMware® vSphere Enterprise Plus with VMware vCenter Server. Assumes a five-host configuration, 2 processors on each host, 2 years of support costs for both products, and no operating system costs included. The Microsoft solution can use either the free Microsoft Hyper-V Server 2008 R2 hypervisor or an existing Windows Server 2008 R2 hypervisor. Based on Microsoft estimated retail prices and published VMware prices available at https://www.vmware.com/vmwarestore as of 08/04/2009 for purchases in the United States. Actual reseller prices may vary.
Windows Server 2008 R2 SP1 ► Many of the scenarios discussed leverage features new to R2 and R2 SP1 ► Live Migration & HA ► New processor support – improved performance & lower costs ► Enhanced scalability – greater VM density & lower TCO ► Networking enhancements – improved performance & 10 GbE ready ► Dynamic VM capabilities ► Enhancements to Server Core – easier management & lower TCO ► Dynamic Memory & RemoteFX in SP1
Hyper-V Configuration Guidelines ► Hyper-V Root Configuration – Plan for 1GB+ memory reserve for the management OS in the root partition – Plan for one dedicated NIC for management purposes – Plan (ideally) for one dedicated NIC for live migration – Separate LUNs/Arrays for management OS, guest OS VHDs and VM storage – Management OS and VHD LUNs should employ RAID to provide data protection and performance – Challenge for blades with 2 physical disks
Hyper-V Configuration Guidelines ► Hyper-V Guest Configuration – Use fixed-size VHDs for the guest OS – Account for page file consumption in addition to OS requirements: OS VHD size (minimum 15GB) + VM memory size = minimum VHD size – Account for space needed by additional VM files. Example for SQL: OS VHD size + VM memory size + data files + log files (c) 2011 Microsoft. All rights reserved.
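The minimum-VHD rule above is simple arithmetic; a small sketch (Python purely for illustration, and the function name is ours, not from the deck):

```python
def min_guest_os_vhd_gb(vm_memory_gb, extra_files_gb=0):
    """Minimum fixed VHD size per the guideline above:
    OS VHD size (15 GB minimum) + VM memory size (page file headroom),
    plus any additional files the VM needs (e.g. SQL data and log files)."""
    OS_BASE_GB = 15  # minimum OS VHD size from the slide
    return OS_BASE_GB + vm_memory_gb + extra_files_gb
```

For example, a guest with 16 GB of RAM would need at least a 31 GB OS VHD under this rule.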
Dynamic Memory in SP1 ► Overview – A memory management enhancement for Hyper-V – Enables customers to dynamically grow and shrink the memory of a VM – Available as a feature in Windows Server 2008 R2 SP1 ► Benefits – Better consolidation ratios with predictable performance – Enables linear scalability for deployment environments – Designed for production use – Recommendations for use with server applications are still being evaluated ► With the SP1 release, we will meet all the Gartner/Burton Group requirements for an enterprise-ready hypervisor
How does it work? ► VM memory configuration parameters: – Initial (what the VM will boot with) – Maximum (what the VM can grow to) ► Memory is pooled and dynamically distributed across VMs ► Memory is dynamically allocated/removed based on VM usage with no service interruption ► Guest enlightened: guests & Hyper-V work TOGETHER ► Memory is added and removed via synthetic memory driver (memory VSC) support
How is it different? ► There is a difference between a memory overcommit feature and actually overcommitting memory – We warned of the dangers of overcommitting memory – We pressed against the use of overcommit from a marketing perspective ► Dynamic Memory allows for predictable, consistent performance even after overcommitting memory ► The key is avoiding the performance cliff that is possible with the VMware solution ► The features VMware has do NOT provide significantly more scale than Hyper-V with DM does (despite what VMware will tell our customers)
How does VMware do it? ► Memory ballooning and un-ballooning – Set VM memory, reservation, and limit – Will add and remove as necessary, between the reservation and limit ► Memory compression ► Page sharing – Optimizes common memory pages – Affected by ASLR and SLAT ► Hypervisor memory paging – Pages at the hypervisor when physical memory is completely committed ► Compression, page sharing, and hypervisor paging are only used after all physical memory is committed, which neither VMware nor Microsoft recommends!
Base Hypervisor Performance
Project Virtual Reality Check ► Available at www.virtualrealitycheck.net ► Performed by Ruben Spruijt and Jeroen van de Kamp ► Not sponsored by any one company, although VMware and Citrix have assisted the site ► Results are focused on running and replacing Terminal Server workloads only, on vSphere, Hyper-V, and XenServer
► Results are not for redistribution or validation, although they are public ► Phase II results are from February 2010 and show a significant increase in vSphere performance
Project VRC Results ► Enabling EPT/RVI results in a significant increase in the capacity of VMs running TS – vSphere – 90% increase – XenServer – 95% increase – Hyper-V – 154% increase ► When scaling x86 TS VMs without Hyper-Threading, vSphere is 5% better than both Xen and Hyper-V
Project VRC Results ► When scaling x86 TS VMs with Hyper-Threading, Xen and Hyper-V are 15% better than vSphere – When scaling up to 100 TS sessions, response times for all three hypervisors are fairly equal – Beyond 100 sessions, vSphere response times increase with each new session ► When scaling x64 TS VMs, Xen and Hyper-V are within 13.6% of bare metal, and are 27% better than vSphere
2010 ESG Paper ► 3rd-party performance validation white paper, sponsored by Microsoft ► Key Points – Hyper-V is easy to install and get running for administrators familiar with Windows – Hyper-V provides high availability with transparent and automatic failover of virtual machines (VMs) – Hyper-V Live Migration provides zero-downtime migrations of VMs between Hyper-V servers – Most importantly, the report shows Hyper-V performance versus physical is excellent: 95% to 99% of physical disk performance, and 89% to 98% of physical performance for the tested workloads
2010 ESG Paper ► All this strongly supports the position that Hyper-V performance is not a bottleneck or blocker to implementation and that customers can run almost all their workloads on Hyper-V ► Publicly available; you can distribute it to your customers/clients ► http://www.enterprisestrategygroup.com/2010/07/microsoft-hyper-v-r2-scalable-native-server-virtualization-for-the-enterprise/ ► http://www.infostor.com/index/articles/display/5976242552/articles/infostor/esg-lab-review/2010/july-2010/microsoft-hyper-v.html
Hyper-V Fixed Virtual Hard Disk Performance Overhead Analysis
Hyper-V R2 Application Overhead Analysis
Microsoft/Intel iSCSI Test ► Used Windows Server 2008 R2, Intel Xeon 5500 processors, and Intel 10Gbps Ethernet adapters ► Reached over one million IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on native hardware ► Reached over 700,000 IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on Hyper-V to the guest OS
Microsoft/Intel iSCSI Test ► Native Performance ► In-Guest VM Performance
Microsoft/Intel iSCSI Test ► iSCSI performance can be as good as or better than either Fibre Channel or FCoE; iSCSI is a compelling offering for those who don't want to invest in FC ► Windows Server and Hyper-V are not a limiting factor in IO performance; there shouldn't be any significant concern around IO when virtualizing with Hyper-V ► More details (along with a great Photoshop picture) at http://gestaltit.com/all/tech/storage/stephen/microsoft-and-intel-push-one-million-iscsi-iops/
ESG Performance Results
ESG Lab Summary 2011 ► Available online: http://www.microsoft.com/virtualization/en/us/solution-business-apps.aspx ► Useful links, resources, case studies, white papers and webcasts
2011 ESG Test Lab – Physical ► Servers: HP BL680c blades, up to 24 cores and 128 GB RAM per blade, 2x4 Gbps FC per server ► SAN: EMC CX4-960 with 155 15K RPM FC disk drives ► RAID-10 pools (drive counts): Data (88), Logs (16), OS (24), Apps (16) ► Workloads: Exchange, SQL Server, SharePoint, utilities http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
2011 ESG Test Lab – Virtual ► Application: Exchange 2010 ► VM configuration: 4 vCPU, 4 GB RAM, Microsoft Windows Server 2008 R2 SP1 ► Hypervisor: Microsoft Hyper-V R2 (physical OS: Windows Server 2008 R2 SP1) ► Virtual machine images: fixed VHD ► Exchange data and logs: fixed VHD
Microsoft Exchange Server Jetstress 2010 ► Scale from 5,000 to 20,000 simulated Exchange mailboxes ► 5,000 mailboxes per VM ► 250 MB mailboxes ► 0.15 IOPS per mailbox ► 2 databases/2 copies per VM http://download.microsoft.com/download/1/7/F/17FB551C-0905-4A04-AB46-2EBA616CFDF3/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20SharePoint%20Mar%2011_Wide.pdf
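The workload parameters above determine the aggregate disk load directly; a quick sketch (Python for illustration only, with names of our own choosing):

```python
MAILBOXES_PER_VM = 5_000
IOPS_PER_MAILBOX = 0.15  # per the Jetstress profile above

def required_iops(total_mailboxes):
    """Aggregate steady-state IOPS the storage must sustain."""
    return total_mailboxes * IOPS_PER_MAILBOX

def vms_needed(total_mailboxes):
    """VM count at 5,000 mailboxes per VM (rounded up)."""
    return -(-total_mailboxes // MAILBOXES_PER_VM)
```

At the 20,000-mailbox high end this works out to 3,000 IOPS across four VMs (750 IOPS per VM).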
2011 Exchange Workload Results (results chart)
Exchange Tested Solutions ► Microsoft and participating server and storage vendors ► Key design decision points for common scenarios ► Cost-effective resource planning (server, storage, network) ► Database Availability Group and multi-site designs for HA ► Documented results of exhaustive performance tests ► 500 Mailboxes in a Single Site Running Hyper-V on Dell Servers ► 9000 Mailboxes in Two Sites Running Hyper-V on Dell M610 Servers, Dell EqualLogic Storage, and F5 Load Balancing Solutions ► 15000 Mailboxes in Two Sites Running Hyper-V on Unisys ES7000 Servers and Hitachi Adaptable Modular Storage 2000 Family ► 16000 Mailboxes in a Single Site Deployed on IBM and Brocade Hardware ► 20000 Mailboxes in Two Sites Running Hyper-V on Dell R910 Servers, EMC CLARiiON Storage, and Brocade Network Solutions ► 32400 Mailboxes in Three Sites Running Hyper-V on Cisco Unified Compute System Blade Servers and EMC CLARiiON Storage
Exchange Tested Solutions with Hyper-V (summary chart)
Exchange Tested Solutions on Hyper-V

Solution  Mailboxes  Sites  Cores  RAM (GB)  Drives  Mbox VMs  CAS/Hub VMs  Total VMs
1         9,000      2      36     432       144     9         9            18
2         15,000     2      96     512       136     12        8            20
3         20,000     2      128    768       160     16        16           32
4         32,000     3      96     1,152     432     24        12           36

1. http://technet.microsoft.com/en-us/library/gg513522.aspx
2. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4D83E5AC-7D9A-47C6-A843-A5520D118FC4
3. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6FD7E23F-30C8-4FB7-BCA3-A5DE9AA9A8E4
4. http://www.emc.com/collateral/hardware/white-papers/h7337-exchange-unified-cisco-hyper-v-wp.pdf
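As a sanity check on the tested-solution figures above, the Mailbox-role and CAS/Hub VM counts should sum to the total VM count in each row (Python, data transcribed from the slide; purely illustrative):

```python
# (mailboxes, sites, cores, ram_gb, drives, mbx_vms, cas_hub_vms, total_vms)
TESTED_SOLUTIONS = [
    (9_000,  2,  36,   432, 144,  9,  9, 18),
    (15_000, 2,  96,   512, 136, 12,  8, 20),
    (20_000, 2, 128,   768, 160, 16, 16, 32),
    (32_000, 3,  96, 1_152, 432, 24, 12, 36),
]

for mbx, _sites, _cores, _ram, _drives, mbx_vms, cas_hub_vms, total in TESTED_SOLUTIONS:
    # Mailbox VMs + CAS/Hub VMs must account for every VM in the solution
    assert mbx_vms + cas_hub_vms == total
```

The same data also gives the consolidation density per Mailbox VM (e.g. 9,000 mailboxes over 9 Mailbox VMs is 1,000 mailboxes per VM).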
2011 Exchange Results Summary ► 20,000 simulated Exchange 2010 users on a single server in four Hyper-V R2 VMs with Jetstress ► Exchange tested solutions from 9,000 through 32,000 mailboxes ► The 32,000 mailbox solution was deployed in 36 Hyper-V R2 VMs over three sites ► The performance, scalability and low overhead of Hyper-V R2 can be used to reduce costs and improve the manageability, flexibility, and availability of consolidated Exchange applications.
Exchange Server Virtualization Best Practices
Why Virtualize Exchange ► Take advantage of virtualization capabilities to optimize server utilization ► Consolidate under-utilised servers onto a single virtualized host ► Lower costs by reducing space needs and power consumption ► Rapid provisioning of a mobile infrastructure (Diagram: datacenter hosts running Exchange 2010 CAS & Hub, Mailbox (DAG), and UM VMs alongside file & print, management, and database servers, NLB, and two domain controllers)
Updated Support Guidance – Support for virtualized Exchange servers since Exchange Server 2007 SP1 – The Exchange 2010 release continued support for virtualization – Expanding support scenarios – Release of the Exchange 2010 Virtualization Guidance whitepaper Support matrix (columns: Ex 2007 / Ex 2010 RTM / Ex 2010 SP1, now): – Any hypervisor validated under the Windows SVVP – All storage used by an Exchange guest must be block-level storage – Virtual storage must be fixed size, SCSI pass-through, or iSCSI – Taking virtual snapshots of an Exchange guest is not supported – Virtual processor-to-logical processor ratio no greater than 2:1 – Exchange HA in combination with hypervisor clustering or migration – Unified Messaging role supported
Support Guidance (continued) ► Storage – Fixed-size VHD, SCSI pass-through, or iSCSI – Dynamic or thin disks are not supported – Differencing or delta disks are not supported – Disk space should be 15GB + VM memory size
Support Guidance (continued) ► Other considerations – Run only the management software in the parent partition; no other roles should be running – VM snapshots are not supported – A 2:1 vCPU-to-physical-CPU ratio is supported, but 1:1 is best – Current guidance is not to use any memory management or oversubscription
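Taken together, the support rules above amount to a short checklist. One way to encode them as a pre-deployment check (Python; the function and parameter names are ours, and this is an illustrative sketch, not official validation logic):

```python
SUPPORTED_STORAGE = {"fixed_vhd", "scsi_passthrough", "iscsi"}

def exchange_vm_config_ok(storage_type, total_vcpus_on_host,
                          host_logical_cpus, uses_snapshots,
                          uses_dynamic_memory):
    """Return True if a guest configuration meets the support
    guidance summarized above (illustrative checklist only)."""
    if storage_type not in SUPPORTED_STORAGE:
        return False  # dynamic, thin, and differencing disks are unsupported
    if uses_snapshots:
        return False  # VM snapshots are not supported
    if uses_dynamic_memory:
        return False  # current guidance: no memory oversubscription
    # vCPU-to-logical-CPU ratio may not exceed 2:1 (1:1 is preferred)
    return total_vcpus_on_host <= 2 * host_logical_cpus
```

For example, 16 total vCPUs on an 8-logical-CPU host with fixed VHDs passes (exactly 2:1), while any snapshot use fails regardless of the other settings.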
Deployment Recommendations ► The Exchange application is not 'virtualization aware' ► Core Exchange design principles still apply – Design for performance, reliability and capacity – Design for usage profiles (CAS/MBX) – Design for message profiles (Hub/Edge)
Deployment Recommendations ► Virtualization design principles now apply as well – Design for performance, reliability and capacity – Virtual machines should be sized specifically for the Exchange role (Edge, Hub, CAS, MBX, multi-role) – Hosts should be sized to accommodate the guests they will support
Best Practices: Capacity, Sizing and Performance
Sizing Process Overview ► Start with the physical server sizing process – Calculator & TechNet guidance ► Account for virtualization overhead ► Determine VM placement – Account for VM migration if planned ► Size root servers, storage, and network infrastructure
Guest Sizing Rules of Thumb ► Size the Mailbox role first – CPU ratios for other roles are based on Mailbox role sizing – Mailbox role performance is key to user experience – High availability design significantly impacts sizing ► Don't oversubscribe resources – Size based on anticipated peak workload; don't under-provision physical resources ► Don't forget network needs
Guest Sizing for Unified Messaging ► Newly supported for virtualization – Requires Exchange 2010 SP1 (or greater) ► The role is susceptible to poor voice quality and/or latency if undersized ► Requires a minimum of 4 virtual processors ► UM must be able to utilize physical processors on demand ► Consider network requirements (low latency, sufficient bandwidth) to meet UM needs ► Tests show that a 4 vCPU/16GB VM can handle 40 concurrent calls with Voice Mail Preview and 65 calls without
Root Server Sizing ► Root server storage sizing includes space for the OS & required hypervisor components, plus connectivity to storage for guest VMs – Don't forget about high availability of storage if required (multi-path HBAs or iSCSI NICs, redundant paths, etc.) ► Network sizing is critical: number of interfaces and bandwidth – Consider app connectivity, storage networking, heartbeats, CSV, VM migration
Root Server Sizing ► CPU sizing should include root needs plus per-guest overhead – Follow hypervisor vendor recommendations ► Memory sizing should not assume oversubscription – Follow hypervisor vendor recommendations – Provide memory for the root plus the sum of running VM requirements – Memory for the Hyper-V root = the larger of 512MB, or the sum over running VMs of 32MB for the first 1GB of virtual RAM + 8MB for each additional GB of virtual RAM – Example: 8 VMs running, each with 32GB RAM. Root requires 8 * (32MB + 8MB * 31) = 2,240MB
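The root-memory formula above can be checked with a few lines (Python for illustration; the function name is ours):

```python
def hyperv_root_memory_mb(vm_ram_gb_list):
    """Root-partition memory reserve per the formula above:
    the larger of 512MB, or the per-VM sum of 32MB for the first
    1GB of virtual RAM plus 8MB for each additional GB."""
    per_vm_total = sum(32 + 8 * (gb - 1) for gb in vm_ram_gb_list)
    return max(512, per_vm_total)

# The slide's example: 8 running VMs, each with 32GB of RAM
print(hyperv_root_memory_mb([32] * 8))  # 2240
```

Note the 512MB floor only matters for very small hosts; a single 32GB VM already requires a 280MB reserve on its own.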
Virtual Processors ► Scale up CPU on VMs as much as possible – Prefer 1 x 4 vCPU machine over 4 x 1 vCPU machines: take advantage of Exchange scalability ► Don't oversubscribe CPUs unless consolidating with P2V or a similar scenario ► Generally assume 1 logical CPU == 1 virtual CPU; don't assume that a hyperthreaded (SMT) logical CPU counts as a full one
Best Practices: Server Deployment
Locating Virtual Machines ► VM placement is important for high availability ► Don't co-locate copies of the same DAG database on a single physical host ► Exchange is unaware of VM location relative to other VMs – No path correction in transport to avoid data loss ► Ensure peak workload can run in standard VM locations – It's OK to move VMs temporarily for maintenance, assuming high availability requirements are met and the current workload can be serviced
Storage Decisions ► Exchange performance and health are highly dependent on the availability and performance of storage ► Many options for presenting storage to VMs – VHD – FC – iSCSI, FCoE – DAS ► Optimize for performance and general design goals – We recommend looking for options that provide large mailboxes at low cost
Storage Decisions ► Exchange storage should be on spindles separate from the guest OS VHD physical storage ► Exchange storage must be fixed VHD, SCSI pass-through or iSCSI – The preference is SCSI pass-through to host queues, databases, and log file streams – Hyper-V Live Migration suggests Cluster Shared Volumes with fixed VHD (shorter "black-out" period) ► FC/SCSI HBAs must be configured in the root OS, with LUNs presented to VMs as pass-through or VHD
Storage Decisions ► Internet SCSI (iSCSI) – Standard best practices for iSCSI-connected storage apply (dedicated NIC, jumbo frames, offload, etc.) – An iSCSI initiator in the guest is supported, but account for reduced performance ► Exchange storage must be block-level – Network attached storage (NAS) volumes are not supported
Exchange VM Deployment ► Exchange setup must be run when VM is provisioned – Not “sysprep friendly” ► Possible to script Exchange setup to fully automate Exchange VM provisioning ► Build “starter image” with desired OS, patches, pre-reqs, and Exchange install binaries
Best Practices: High Availability & VM Migration
High Availability And Disaster Recovery ► Exchange high availability definition – Automatic switchover of application services that doesn't compromise the integrity of application data – Selection of the "active" data set occurs automatically within the application ► Exchange disaster recovery definition – Manual failover of application services with high retention of data integrity – Selection of the "active" data set occurs manually outside the application; the Exchange application provides support to minimize data loss through replication
Exchange 2010 High Availability ► Database Availability Group (DAG) – A group of up to 16 Exchange Server 2010 Mailbox servers that provide automatic database-level recovery – Uses continuous log replication and a subset of Windows Failover Clustering technologies – Can extend across multiple datacenters/AD sites
Exchange 2010 High Availability ► Benefits of Exchange Native Data Protection – Protection from database, server or network failure – Automatic failover protection and manual switchover control is provided at the mailbox database level instead of at the server level – Support for up to 16 copies, including lagged copies
Host Based Failover Clustering ► Host-based failover clustering HA – Uses host-based failover clustering to automatically fail VMs over to an alternate cluster node in the event of a critical hardware issue (virtualization-platform independent) ► What you need to be aware of: – Not an Exchange-aware solution – Only protects against server hardware/network failure – No HA in the event of storage failure / data corruption – The trend is larger mailboxes = larger database sizes = longer time to recover from data loss = use a DAG – Requires a shared storage deployment
VM Migration and Exchange 2010 ► Physical computer maintenance – Operating system/application updates – Hardware maintenance ► Rebalancing workloads – Dynamic redistribution of VMs to optimize workload on physical hardware ► Green IT – 'Off-peak' virtual machine consolidation
VM Cluster & Migration Considerations ► Minimize “outage” during migration operations – Consider CSV rather than pass-through LUNs for all Mailbox VM storage ► Disable migration technologies that save state and migrate: always migrate live or completely shut down ► Consider relaxing cluster heartbeat timeouts – Cluster nodes considered down after 5 seconds by default ► Be aware of additional network interface requirements for VM migration technologies – size network appropriately
Best Practices: Coexistence With Other Workloads
Private Cloud Considerations ► Given fixed resource requirements, isolate Exchange within the private cloud as much as possible ► Be prepared to apply different resource management policies to Exchange VMs vs. other, less mission-critical workloads ► Use the private cloud as pre-built infrastructure, not necessarily dynamic – Based on deployment sizing, understand overall resource requirements and allocate accordingly from the pool of cloud resources
Resource Allocation & Balancing ► Disable hypervisor-based auto-tuning features – Dynamic memory – Storage tuning/rebalancing ► Exchange Mailbox role IOPS are heavily dependent on the ESE cache; dynamic memory can negatively impact it ► Size for calculated resource requirements – no reliance on dynamic tuning should be needed
Enhancing Exchange on Hyper-V
System Center Integration ► Data protection – Protect Exchange up to every 15 minutes – 'Lossless restore' – within 1 committed transaction – Understands & protects scenarios such as DAG ► Monitoring – Proactively monitor the Exchange infrastructure – Drive down the cost of ensuring service levels – A new correlation engine helps monitor more efficiently ► Provisioning & migration – Rapidly provision Exchange infrastructure – Automate migrations for load balancing of VMs – Seamlessly migrate workloads
Citrix NetScaler VPX ► Hyper-V virtual appliance – Can be deployed through SCVMM ► Speed up deployment with application templates & configuration wizards ► Once deployed, NetScaler load balances Exchange 2010 Client Access servers ► Ships with a PRO-enabled OpsMgr management pack – Load balance, optimize, secure & report – Dynamically scale infrastructure based on monitoring
Partner Evidence: Exchange Virtualization ► HP recommended configurations for Exchange 2010 virtualization using Hyper-V R2 – Sizing for 20,000 users, 512 MB mailboxes: all roles virtualized and Live Migration for MBX servers; hardware: HP ProLiant BL460c G6 server blades and HP StorageWorks 8400 Enterprise Virtual Array (EVA8400) storage systems – http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-2127ENW.pdf – Sizing for 5,000 users, 256 MB mailboxes: all roles virtualized and Live Migration for MBX servers; hardware: HP ProLiant BL460c G6 server blades and HP LeftHand P4500 storage systems – http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-9895ENW.pdf
Summary ► Why Microsoft Virtualization for Microsoft Server Applications? ► Why Hyper-V? ► ESG Performance Results ► Exchange Server Virtualization Best Practices ► Surrounding Ecosystem ► Resources
Related Content Breakout Sessions: EXL303: Exchange Server 2010: High Availability Concepts EXL304: Load Balancing with Microsoft Exchange Server 2010 EXL308: Exchange Virtualisation: Is it right for you? EXL306: Best Practices for Virtualization of Microsoft Exchange 2010 SVR-VIR308: iSCSI: Getting Best Performance, High Availability and Virtualisation VIR307: Understand How Microsoft Virtualisation Compares to VMware VIR/COS301: Best Practices for Private Cloud Implementation
Track Resources Windows Azure - http://www.microsoft.com/windowsazure/
Enrol in Microsoft Virtual Academy Today Why enrol, other than it being free? The MVA helps improve your IT skill set and advance your career with a free, easy-to-access training portal that allows you to learn at your own pace, focusing on Microsoft technologies. What do I get for enrolment? ► Free training to help you become the cloud hero in your organization ► Help mastering your training path and getting recognition ► Connect with other IT pros and discuss the cloud Where do I enrol? www.microsoftvirtualacademy.com Then tell us what you think: TellTheDean@microsoft.com
© 2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Resources www.msteched.com/Australia – Sessions On-Demand & Community http://technet.microsoft.com/en-au – Resources for IT Professionals http://msdn.microsoft.com/en-au – Resources for Developers www.microsoft.com/australia/learning – Microsoft Certification & Training Resources