Microsoft Exchange Best Practices and Design Guidelines on EMC Storage


1 Microsoft Exchange Best Practices and Design Guidelines on EMC Storage
Exchange 2010 and Exchange 2013 VNX and VMAX Storage Systems Strategic Solutions Engineering Updated – October 2013

2 Topics
- Exchange – What has changed
- Exchange virtualization
- Exchange storage design and best practices
- Exchange backup best practices
- Additional References

3 Exchange – What has changed?
- Exchange user profile changes
- Exchange I/O characteristics
- Background Database Maintenance
- Database Availability Group (DAG)
- Storage choices

4 Exchange…What has changed
Exchange 2007:
- 64-bit Windows
- 32+ GB database cache
- 8 KB block size
- 1:1 DB read/write ratio
- 70% reduction in IOPS from Exchange 2003
Exchange 2010:
- 100 GB database cache (DAG)
- 32 KB block size
- 3:2 DB read/write ratio
- 70% reduction in IOPS from Exchange 2007
Exchange 2013:
- 33% reduction in IOPS from Exchange 2010

5 Exchange User Profile Changes
| Messages sent/received per mailbox per day | Exchange 2010 estimated IOPS per mailbox (stand-alone) | Exchange 2010 estimated IOPS per mailbox (mailbox resiliency) | Exchange 2013 estimated IOPS per mailbox (active or passive) |
| 50 | 0.050 | 0.060 | 0.034 |
| 100 | 0.100 | 0.120 | 0.067 |
| 150 | 0.150 | 0.180 | 0.101 |
| 200 | 0.200 | 0.240 | 0.134 |
| 250 | 0.250 | 0.300 | 0.168 |
| 300 | 0.300 | 0.360 | 0.201 |
| 350 | 0.350 | 0.420 | 0.235 |
| 400 | 0.400 | 0.480 | 0.268 |
| 450 | 0.450 | 0.540 | 0.302 |
| 500 | 0.500 | 0.600 | 0.335 |

6 Exchange Processor Requirements Changes
Megacycles per user:
| Messages sent or received per mailbox per day | Exchange 2010 active DB copy or stand-alone (MBX only) | Exchange 2013 active DB copy or stand-alone (MBX only) | Exchange 2010 active DB copy or stand-alone (multi-role) | Exchange 2013 active DB copy or stand-alone (multi-role) | Exchange 2010 passive DB copy | Exchange 2013 passive DB copy |
| 50 | 1 | 2.13 | N/A | 2.66 | 0.15 | 0.69 |
| 100 | 2 | 4.25 | N/A | 5.31 | 0.30 | 1.37 |
| 150 | 3 | 6.38 | N/A | 7.97 | 0.45 | 2.06 |
| 200 | 4 | 8.50 | N/A | 10.63 | 0.60 | 2.74 |
| 250 | 5 | 10.63 | N/A | 13.28 | 0.75 | 3.43 |
| 300 | 6 | 12.75 | N/A | 15.94 | 0.90 | 4.11 |
| 350 | 7 | 14.88 | N/A | 18.59 | 1.05 | 4.80 |
| 400 | 8 | 17.00 | N/A | 21.25 | 1.20 | 5.48 |
| 450 | 9 | 19.13 | N/A | 23.91 | 1.35 | 6.17 |
| 500 | 10 | 21.25 | N/A | 26.56 | 1.50 | 6.85 |

7 Exchange I/O Characteristics
User IOPS decreased in Exchange Server 2010/2013, but the size of each I/O has increased significantly.
| I/O type | Exchange 2007 | Exchange 2010/2013 |
| Database I/O | 8 KB random I/O | 32 KB random I/O |
| Background Database Maintenance (BDM) I/O | N/A | 256 KB sequential read I/O |
| Log I/O | Varies in size from bytes to the log buffer size (1 MB) | Varies in size from 4 KB to the log buffer size (1 MB) |

8 Exchange 2010/2013 mailbox database I/O read/write ratios
| Messages sent/received per mailbox per day | Stand-alone databases | Databases participating in mailbox resiliency |
| 50–250 | 1:1 | 3:2 |
| 300–500 | 2:3 | 3:2 |

9 Understanding Exchange I/O
Exchange 2010/2013 I/O to the database (.edb) is divided into two types:
- Transactional I/O (user I/O):
  - Database volume I/O (database reads and writes)
  - Log volume I/O (log reads and writes)
  - Only database I/Os are measured when sizing storage and during Jetstress validation
- Non-transactional I/O:
  - Background Database Maintenance (checksum) (BDM)
For more details, see "Understanding Database and Log Performance Factors" on Microsoft TechNet.

10 Background Database Maintenance (BDM)
What is BDM and what does it do? It is the Exchange Server 2010/2013 database maintenance process, which includes online defragmentation and online database scanning. Both active and passive database copies are scanned.
| | Exchange 2010 | Exchange 2013 |
| Read I/O size | 256 KB | 256 KB |
| Database scan completion | 1 week | every 4 weeks |
| IOPS per database | 30 | 9 |
| Bandwidth | 7.5 MB/s* | 2.25 MB/s* |
* Based on EMC testing with Jetstress 2010/2013

11 Background Database Maintenance (BDM)
- Both active and passive database copies are scanned
- On the active copy, BDM can be scheduled to run during the online maintenance window (the default is 24 x 7)
- The passive copy is hardcoded to a 24 x 7 scan
- Jetstress has no concept of a passive copy; all copies are treated as active
Possible BDM-related issues (mostly for Exchange 2010):
- Bandwidth/throughput required for BDM, and BDM IOPS
- Not enough FE ports, not enough BE ports, improper RAID configuration
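Because BDM runs against every database copy, the aggregate sequential-read bandwidth it demands from the array follows directly from the per-database figures above. A minimal sketch (the function name and signature are our own, not an EMC tool):

```python
# Approximate BDM bandwidth per database copy, from the EMC Jetstress-based
# figures above (7.5 MB/s per copy on Exchange 2010, 2.25 MB/s on 2013).
BDM_MBPS = {"2010": 7.5, "2013": 2.25}

def bdm_bandwidth_mbps(databases: int, copies_per_database: int, version: str) -> float:
    """Estimate total BDM read bandwidth (MB/s) across all database copies."""
    return databases * copies_per_database * BDM_MBPS[version]
```

For example, 10 databases with 2 copies each on Exchange 2013 need roughly 45 MB/s of sustained sequential read bandwidth; the same layout on Exchange 2010 would need 150 MB/s, which is why BDM throughput issues are mostly an Exchange 2010 concern.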

12 Exchange Content Index Considerations
Content indexing space considerations:
- In Exchange 2010, content index space is estimated at about 10% of the database size.
- In Exchange 2013, content index space is estimated at about 20% of the database size.
- An additional 20% must be added for content indexing maintenance tasks (such as the master merge process) to complete.
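These estimates amount to a simple capacity multiplier; a hedged sketch (the function name and defaults are ours):

```python
def content_index_gb(database_size_gb: float, exchange_version: str = "2013") -> float:
    """Estimate content index space: 10% (2010) or 20% (2013) of the database
    size, plus an additional 20% for maintenance such as the master merge."""
    base = 0.20 if exchange_version == "2013" else 0.10
    return database_size_gb * (base + 0.20)
```

A 1,000 GB Exchange 2013 database would therefore need roughly 400 GB set aside for its content index and index maintenance.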

13 Database Availability Group
Database Availability Group (DAG): the base component of the high availability and site resilience framework built into Exchange 2010/2013.
- A DAG is a group of servers participating in a Windows failover cluster, with a limit of 16 servers and 100 databases
- Any server participating in a DAG can host a copy of any database within the DAG
- Each DAG member server can house one copy of each database (up to 16 copies per database); only one copy is active, the others are passive or lagged
- No configuration of cluster services is required; Exchange 2010/2013 handles the entire installation
- Site DR requires manual work; scripts must be run
- A DAG does not provide recovery from logical database corruption
[Diagram: three-member DAG (MBX1–MBX3) with active (A) and passive (P) copies of DB1–DB3 distributed across the servers]

14 Exchange High Availability
Guidance for deploying DAGs:
- Ensure all elements of the design have resilient components: storage processors, connectivity to the servers, storage spindles, and multiple arrays in DR scenarios
- DAG copies should be stored on separate physical spindles, provided all resiliency is reached at the source site
- On SANs, consider the performance of the passive and active copies within one array

15 Exchange Storage Options – DAS or SAN?
EMC offers both options (Iomega, VNXe, VNX, VMAX):
- Small, low cost = DAS
- Large-scale efficiency = SAN
- Best long-term TCO = SAN
- Virtualization ready = SAN
Understand which storage type best meets the design requirements:
- Physical or virtual?
- Dedicated to Exchange or shared with other applications?
Follow EMC proven guidance for each platform:
- Proven Solutions whitepapers
- EMC storage solution submissions to the Microsoft ESRP program

16 Exchange Server IOPS Per Disk
Use the following table for IOPS-per-drive values when calculating disk requirements for Exchange 2010/2013. Recommendations may change based on future test results.
| Disk type | Exchange 2010/2013 database IOPS per disk, random workload (VNX) | Exchange 2010/2013 database IOPS per disk, random workload (VMAX) | Exchange 2010/2013 log IOPS per disk, sequential workload (VNX and VMAX) |
| 7.2K rpm NL-SAS/SATA | 65 | 60 | 180 |
| 10K rpm SAS/FC | 135 | 130 | 270 |
| 15K rpm SAS/FC | | | 450 |
| Flash | 1250 | | 2000 |

17 Exchange Virtualization
- Recommendations and supportability
- Design best practices references
- Supported deployment configurations
- Configuration best practices

18 Exchange 2010/2013 virtualization
Recommendations and supportability:
- Virtualization is a proven technology and is cloud ready
- Exchange is not virtualization aware, but it is virtualization friendly
- EMC recommends virtualizing Exchange for most deployments, based on user requirements
- Supported on Hyper-V, VMware, and other hypervisors
- Hypervisor vendors must participate in the Windows Server Virtualization Validation Program (SVVP)

19 Exchange virtualization
Exchange 2010/2013 supported deployment configurations (examples):
- Exchange DAG on stand-alone hypervisor hosts
- Exchange stand-alone Mailbox servers on a hypervisor host cluster (VMware DRS/HA or Windows cluster)
- Exchange DAG with hypervisor host clusters (VMware DRS/HA or Windows cluster)

20 Exchange virtualization
Configuration best practices – general
Know your hypervisor's limits:
- 256 SCSI disks per host (or cluster)
- Processor limits (vCPUs per virtual machine)
- Memory limits
Be aware of the hypervisor CPU overhead:
- Microsoft Hyper-V: ~10-12%
- VMware vSphere: ~5-7%

21 Exchange virtualization
Configuration best practices:
- Core Exchange design principles still apply: design for performance, high availability, and user workloads
- Size virtual machines specifically to the Exchange role
- Physical sizing still applies: the physical hypervisor servers must accommodate the guests they will support
- DAG copies must be spread across physical hosts to minimize the outage caused by a physical server issue

22 Exchange virtualization
Configuration best practices – VM placement recommendations
- Use common sense when placing virtual machines
- Deploy VMs with the same role across multiple hosts
- Do not deploy MBX virtual machines from the same DAG on the same host server

23 Exchange virtualization
Configuration best practices:
- Disable migration technologies that save state and migrate; always migrate live or completely shut down virtual machines
- Dedicate/reserve CPU and memory for the Mailbox virtual machines and do not overcommit; pCPU-to-vCPU ratios: 2:1 is OK, 1:1 is a best practice
- Disable hypervisor-based auto-tuning features; no dynamic memory
- The hypervisor server should have at least 4 paths (HBA/CNA/iSCSI) to the storage – 4 total ports
- Install EMC PowerPath on the hypervisor server for maximum throughput, load balancing, path management, and I/O path failure detection

24 Exchange virtualization
Configuration best practices – storage
- Exchange storage should be on spindles separate from the guest OS physical storage
- Exchange storage must be block-level; network attached storage (NAS) volumes are not supported – no NFS, SMB (other than SMB 3.0), or any other NAS technology
- Storage must be fixed VHD/VHDX/VMDK, SCSI pass-through/RDM, or iSCSI
- Hyper-V Live Migration suggests Cluster Shared Volumes with fixed VHD (faster "black-out" period)

25 SMB 3.0 Support
Only in virtualized configurations:
- VHDs can reside on SMB 3.0 shares presented to the Hyper-V host
- No support for a UNC path for Exchange database and log volumes (\\server\share\db1\db1.edb)

26 Supported SMB 3.0 Configuration Example

27 Exchange virtualization
Configuration best practices – Hyper-V storage
Virtual SCSI (pass-through or fixed disk):
- VHD on the host – recommended for OS and program files
- Pass-through disk on the host – recommended for Exchange database and log volumes
iSCSI:
- iSCSI direct from a guest virtual machine
- iSCSI initiator on the host, with the disk presented to the guest as pass-through
- The iSCSI initiator from the guest performs well and is easier to configure
MPIO or EMC PowerPath – PowerPath recommended

28 Exchange virtualization
Configuration best practices – VMware VMFS and RDM trade-offs
| VMFS | RDM |
| A volume can host many virtual machines (or can be dedicated to one virtual machine) | Maps a single LUN to one virtual machine; isolated I/O |
| Increases storage utilization; provides better flexibility and easier administration and management | More LUNs = easier to hit the limit of 256 LUNs that can be presented to an ESX server |
| Cannot have hardware-enabled VSS backups | Required for hardware VSS and replication tools that integrate with Exchange databases |
| Large third-party ecosystem with V2P products to aid in certain support situations | Can help reduce physical-to-virtual migration time |
| Not supported for shared-disk clustering | Required for shared-disk clustering |
| Full support for VMware Site Recovery Manager | |

29 Virtualized Exchange Best Practices
References
For Hyper-V:
- Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V
- Best Practices for Virtualizing and Managing Exchange 2013
For VMware:
- Microsoft Exchange 2010 on VMware Best Practices Guide
- Microsoft Exchange 2010 on VMware Design and Sizing Examples
- Microsoft Exchange 2013 on VMware Best Practices Guide
- Microsoft Exchange 2013 on VMware Availability and Recovery Options
- Microsoft Exchange 2013 on VMware Design and Sizing Guide

30 Exchange Storage Design and Best Practices
- Mailbox Server storage design methodology
- Recommendations and supportability
- Design best practices references
- Supported deployment configurations
- Configuration best practices

31 Exchange Mailbox Server Storage Design Methodology
Phase 1: Gather requirements
- Total number of users
- Number of users per server
- User profile and mailbox size
- User concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, Blackberry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Use EMC Proven Solutions whitepapers
- Use Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design
Use the Microsoft Exchange validation tools:
- Jetstress – for storage validation
- LoadGen – for user workload validation and end-to-end solution validation

32 Exchange Storage Design
Exchange building-block design methodology
What is a building block? A building block represents the amount of resources required to support a specific number of Exchange users on a single server or VM.
Building blocks are based on requirements and include:
- Compute requirements (CPU, memory, and network)
- Disk requirements (database, log, and OS)
Why use the building-block approach?
- It can be easily reproduced to support all users with similar user profile characteristics
- It makes additions to the Exchange environment much easier and more straightforward, which helps with future environment growth
- It has been very successful in many real-world customer implementations
See the Appendix for the building-block design process.

33 Exchange Storage Design
Exchange storage sizing
- Do not rely solely on automated tools when sizing your Exchange environment
- Put time and effort into your calculations, and provide supporting factual evidence for your designs rather than fictional calculations
- Size Exchange based on I/O, mailbox capacity, and bandwidth requirements
- Factor in other overhead variables such as archiving, snapshot protection, virus protection, mobile devices, and a risk factor
- Confirm Exchange storage requirements with array-specific sizing tools

34 Exchange Storage Design
Exchange sizing tool options
- EMC Exchange Designer with DIVA
- Microsoft Exchange Server Role Requirements Calculator (Exchange 2010 and Exchange 2013 versions)
- VSPEX Sizing Tool
- Array-specific sizing tools, e.g. VNX Disksizer
- Manual calculation, for advanced administrators

35 General Storage Design Guidance
Best practices
- Isolate the Exchange server workload on its own set of spindles, away from other workloads, to guarantee performance
- When sizing, always calculate both I/O requirements and capacity requirements
- Separate the databases and logs onto different volumes
- Deploy DAG copies on separate physical spindles
- Databases up to 2 TB in size are acceptable when a DAG is being used; the exact size should be based on customer requirements
- Ensure that your solution can support LUNs larger than 2 TB

36 General Storage Design Guidance
Best practices – continued
- Consider backup and restore times when calculating the database size
- Spread the load as evenly as possible across array resources (VMAX engines, VNX SPs, back-end buses, etc.)
- Always format Windows NTFS volumes for databases and logs with an allocation unit size of 64 KB
- Use an Exchange building-block design approach whenever possible

37 General Storage Design Guidance - VNX
Pools or RAID groups?
- Either method works well and provides the same performance (thick pools versus RAID groups)
- RAID groups are limited to 16 disks per group; pools can support many more disks
- Pools are more efficient and easier to manage
- Use pools if you plan to use advanced features such as FAST VP or VNX Snapshots
- Storage pools can support a single building block or multiple building blocks, based on customer requirements
- Design and expand pools using the correct multiplier for best performance (R1/0 4+4, R5 4+1, R6 6+2)

38 General Storage Design Guidance - VNX
Thick or thin LUNs? Both can be used for Exchange storage (databases and logs).
- Thick LUNs are recommended for heavy workloads with high-IOPS user profiles
- Thin LUNs are recommended for light-to-medium workloads with low-IOPS user profiles:
  - Benefit: significantly reduced initial storage requirements
  - Use VNX Pool Optimizer before formatting volumes
  - Can enable FAST Cache or FAST VP for fast metadata promotions

39 Exchange Storage Design with FAST VP
Configuration recommendations (VNX and VMAX)
- Separate data from logs, due to their different workloads:
  - Data – random workload with skew; high FAST VP benefit
  - Logs – sequential workload without skew; no FAST VP benefit
- Use dedicated pools: they provide a better SLA guarantee and fault domains, and Microsoft recommends them for the most deterministic behavior
- Use thick pool LUNs for the highest performance (on VNX); thin pool LUNs are acceptable with Flash in FAST Cache or in the pool
- Use thin LUNs on VMAX

40 Exchange Storage Design
FAST Cache (VNX only)
- FAST Cache allows the storage system to provide Flash-drive-class performance to the most heavily accessed chunks of data across the entire system
- Absorbs I/O bursts from applications, reducing the load on back-end hard disks
- Automatically absorbs pool metadata
- Improves the performance of the storage solution
- Can be enabled/disabled on a per-storage-pool basis

41 Exchange Storage Design
FAST Cache (VNX only)
FAST Cache usage:
- Pools with thin LUNs: required, for metadata tracking
- Pools with thin and thick LUNs when VNX Snapshots are used: required
- Pools with thick LUNs: not required, but not restricted either
FAST Cache sizing guidance:
- Rule of thumb: for every 1 TB of Exchange dataset, provision 1 GB of FAST Cache
- Monitor and adjust the FAST Cache size; your mileage may vary
- Enable FAST Cache on pools with database LUNs only

42 Exchange Storage design - VMAX
- Ensure that the initial disk configuration can support the I/O requirements
- A thin pool can be configured to support a single Exchange building block or multiple building blocks, depending on customer requirements
- Use Unisphere for VMAX to monitor thin pool utilization and prevent the thin pools from running out of space
- Install the Microsoft hotfix KB on the Windows Server 2012 hosts in your environment

43 Exchange Storage design - VMAX
Design best practices
- Use Symmetrix Virtual Provisioning
- Database and log volumes can share the same disks, but separate them into different LUNs on the same hosts
- For optimal Exchange performance, use striped meta volumes

44 VMAX FAST VP with Exchange
Design best practices
When designing FAST VP for Exchange 2010/2013 on VMAX, follow these guidelines:
- Separate databases and logs onto their own volumes; database and log volumes can share the same disks
- Exclude transaction log volumes from the FAST VP policy, or pin all the log volumes to the tier on which they were created
- Select "Allocate by FAST Policy" to allow FAST VP to use all tiers for new allocations based on performance and capacity restrictions (a feature introduced in the Enginuity™ 5876 code)
- When using FAST VP with an Exchange DAG, do not place DAG copies of the same database in the same pool on the same disks

45 Exchange Validation Tools
- Exchange Jetstress and Load Generator tools
- Exchange 2010 Mailbox Server Role Requirements Calculator: to-the-exchange-2010-mailbox-server-role-requirements-calculator.aspx

46 XtremCache with Exchange
What is XtremCache?
- A server Flash caching solution that reduces latency and increases throughput to improve application performance by leveraging intelligent software and PCIe Flash technology
- Accelerates block I/O reads for applications that require the highest IOPS and/or the lowest response time
- Accelerates reads and protects data by using a write-through cache to the networked storage, delivering persistent high availability and disaster recovery
- Works with array-based FAST and FAST Cache software
- Optimized for both physical and virtual environments

47 Why XtremCache for Exchange?
Consider XtremCache for Exchange if:
- You have an I/O-bound Exchange solution
- You are not sure about your anticipated workload
- You need to guarantee high performance and low latency for specific users (VIP servers, databases, and so on)
XtremCache is proven to improve Exchange performance by:
- Reducing read latencies
- Increasing I/O throughput
- Eliminating almost all high-latency spikes
- Providing greater improvements as the workload increases
- Reducing RPC latencies
- Reducing writes to the back-end storage with XtremCache deduplication

48 XtremCache with Exchange
Configuration recommendations
The XtremCache PCI Flash card can be installed in:
- A physical Exchange Mailbox server
- A hypervisor server hosting Exchange Mailbox virtual machines (VMware or Hyper-V)
Enable XtremCache acceleration only on database volumes.
XtremCache sizing guidance: for a 1,000 GB working dataset, configure a 10 GB XtremCache device.

49 XtremCache with Exchange
Configuration recommendations
When implementing XtremCache with VMware vSphere, consider the following:
- The size of the PCI cache card to deploy
- The number of Exchange virtual machines on each vSphere host that will use XtremCache
- The Exchange workload characteristics (read:write ratio, user profile type)
- The Exchange dataset size
The most benefit is achieved when all reads from a working dataset are cached.

50 XtremCache with Exchange
Configuration recommendations
When adding an XtremCache device to an Exchange virtual machine:
- Set the cache page size to 64 KB and the maximum I/O size to 64 KB (BDM I/O will not be cached)
- Use the VSI plug-in or the XtremCache CLI to set these values when adding the cache device to a virtual machine:
  vfcmt add -cache <cache_device> -set_page_size 64 -set_max_io_size 64

51 XtremCache with Exchange
Configuration recommendations (deduplication)
- Evaluate your workload before enabling deduplication for accelerated Exchange LUNs
- Consider the CPU overhead when enabling deduplication
- Set the deduplication gain based on workload characteristics:
  - If the observed deduplication ratio is less than 10%, EMC recommends turning deduplication off (or setting the gain to 0%), which extends the cache device's life
  - If the observed ratio is over 35%, EMC recommends raising the deduplication gain to match the observed ratio
  - If the observed ratio is between 10% and 35%, EMC recommends leaving the deduplication gain as it is
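The three thresholds above amount to a simple decision rule, sketched here for illustration (the function name and return strings are ours, not an EMC API):

```python
def dedup_gain_action(observed_ratio: float) -> str:
    """Map an observed XtremCache deduplication ratio to EMC's recommended action."""
    if observed_ratio < 0.10:
        return "disable"       # turn dedup off (gain 0%) to extend cache device life
    if observed_ratio > 0.35:
        return "raise gain"    # raise the dedup gain to match the observed ratio
    return "leave as is"       # keep the configured dedup gain
```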

52 Backup Best Practices
- Backup options
- Hardware and software VSS providers
- Backup best practices

53 Exchange Backup Best Practices
Make sure you understand the importance of backups, and ask whether there are long-term retention requirements.
Using a DAG HA copy or lagged copy for point-in-time backups is an option; however, consider the following:
- Backups are often subject to regulatory requirements
- Extended retention times
- There are scenarios that a DAG HA/lagged copy will not address
- If a lagged copy is activated, a new copy must be created again

54 Exchange Backup Best Practices
Streaming backups are no longer allowed or supported in Exchange 2010/2013.
EMC recommends leveraging VSS technology for consistent replication and backup of Exchange databases and log files because it:
- Provides rapid recovery through hardware VSS providers
- Decreases the hardware requirements for backups
- Integrates data deduplication mechanisms
- Alleviates hardware and resource contention at the server level

55 Exchange Backup Best Practices
There are two kinds of VSS providers, hardware-based and software-based. Know the difference:
Hardware-based providers:
- Act as an interface between the Volume Shadow Copy Service and the hardware level by working with a hardware storage adapter or controller
- Perform the shadow copy on the storage appliance, outside of the OS
- Depend on hardware to take the clone/snapshot
- Include VNX SnapView, VNX Snapshots, and Symmetrix TimeFinder
Software-based providers:
- Intercept and process I/O requests in a software layer between the file system and the volume manager
- Are implemented as a user-mode DLL and a kernel-mode device driver; the shadow copy is performed by software
- Are not dependent on hardware, so there is a wider range of platform support
- Include Avamar, NetWorker NMM, and NetBackup

56 Exchange Backup Best Practices
- Snapshots are preferred with VSS:
  - Consistency checks are no longer required with two or more DAG copies
  - Less storage space is required
- Protected clones are possible, but consider the following: larger database sizes; perform the backup from a passive DAG copy; activity during the consistency check and backup
- Backup and restore have the same granularity with hardware VSS: recovery is performed at the LUN level, so consider this during design and layout
- Consider leveraging AppSync with ItemPoint for single-item recovery

57 Exchange Validated Solutions

58 Exchange 2013 ESRP on VNX5700

59 Exchange with XtremSW Cache

60 VSPEX Solutions for Exchange

61 Accelerating Microsoft Exchange Performance with EMC XtremCache
EMC VNX storage and VMware vSphere. Solution completed in March 2013.

62 Solution Architecture

63 Exchange 2010 Building block details
| Item | Value |
| Total number of mailboxes per server | 5,000 mailboxes/server |
| Mailbox size | 1.5 GB per user |
| User profile | 150 messages/user/day (0.150 IOPS) |
| Target average message size | 75 KB |
| Database design | 6 databases/server; 833 users per DB; DB size ~1,300 GB (LUN size 1,650 GB) |
| Log design | 6 log LUNs (90 GB LUN size) |
| Number of Exchange Mailbox VMs per ESX host | 3 |
| Disk configuration per server | 18 (16 DB + 2 log) 2 TB NL-SAS drives |
| Memory/CPU per VM | Recommended 32 GB RAM, CPU megacycles |
| Deleted items retention window ("dumpster") | 14 days |
| Logs protection buffer | 3 days |
| 24 x 7 BDM configuration | Enabled |
| Database read/write ratio | 3:2 (in a mailbox resiliency configuration) |

64 Storage design with XtremSW Cache
- Two storage pools created for databases: 48 x 2 TB 7.2K rpm NL-SAS drives per pool, RAID 1/0
- Each pool contains multiple copies from different VMs
- 3 building blocks (3 VMs): 18 x 1.6 TB LUNs (6 LUNs per VM)
- A 326 GB VMFS datastore is created from the XtremCache PCI card on each vSphere server
- A 50 GB cache device is created for each Exchange VM from the VMFS cache datastore
- The remaining capacity is reserved for VMs that can be migrated from the other vSphere server

65 Additional References

66 Additional References
- Exchange Storage Best Practices and Design Guidelines for EMC Storage whitepaper: design-guid-emc-storage.pdf
- EMC Community Network
- EMC and Partner Exchange 2010 Tested Virtualized Solutions
- Exchange Solution Reviewed Program (ESRP) submissions
- Exchange Mailbox Server Storage Design (Microsoft TechNet)

67 Additional References
- Exchange virtualization supportability guidance
- Understanding Exchange Performance
- Server Virtualization Validation Program
- Exchange 2010 EMC-tested OEM solutions (on Hyper-V):
  - 20,000 users on EMC storage with virtual provisioning
  - 32,400 users on EMC storage with EMC REE

68 Appendix
- Building block design process
- ESI for VNX Pool Optimization

69 Building block design process

70 Exchange Mailbox Server Storage Design Methodology
Phase 1: Gather requirements
- Total number of users
- Number of users per server
- User profile and mailbox size
- User concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, Blackberry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Leverage EMC Proven Solutions whitepapers
- Leverage Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design
Use the Microsoft Exchange validation tools:
- Jetstress – for storage validation
- LoadGen – for user workload validation and end-to-end solution validation

71 Requirements Gathering Example
| Item | Value |
| Exchange version; total number of active users (mailboxes) in the environment | Exchange 2013; 20,000 |
| Site resiliency requirements | Single site |
| Storage infrastructure | SAN |
| Type of deployment (physical or virtual) | Virtual (VMware vSphere) |
| HA requirements | One DAG with two database copies |
| Mailbox size limit | 2 GB maximum quota |
| User profile | 200 messages per user per day (0.134 IOPS) |
| Target average message size | 75 KB |
| Outlook mode | Cached mode, 100 percent MAPI |
| Number of mailbox servers | 8 |
| Number of mailboxes per server | 5,000 (2,500 active / 2,500 passive) |
| Number of databases per server | 10 |
| Number of users per database | 500 |
| Deleted items retention (DIR) period | 14 days |
| Log protection buffer (to protect against log truncation failure) | 3 days |
| BDM configuration | Enabled, 24 x 7 |
| Database read/write ratio | 3:2 (60/40 percent) in a DAG configuration |
| User concurrency requirements | 100 percent |
| Third-party software that affects space or I/O (for example, Blackberry, snapshots) | Storage snapshots for data protection |
| Disk type | 3 TB NL-SAS (7,200 rpm) |
| Storage platform | VNX |

72 Building Block Design
Define and design the building block. In our example, a building block is defined as:
- A mailbox server that supports 5,000 users
- 2,500 users are active during normal runtime; the other 2,500 are passive until a switchover from another mailbox server occurs
- Each building block supports two database copies

73 Building block sizing and scaling process
- Perform calculations for IOPS requirements
- Perform calculations for capacity requirements based on different RAID types
- Determine the best option
- Scale the building block: multiple building blocks may be combined to create the final configuration and storage layout (pools or RAID groups)

74 Building block sizing and scaling process
Front-end IOPS ≠ back-end IOPS:
- Front-end IOPS = total Exchange Mailbox server IOPS
- Back-end IOPS = storage array IOPS (including the RAID penalty)
Understand disk IOPS by RAID type. The block front-end Exchange application workload is translated into a different back-end disk workload based on the RAID type in use:
- For reads there is no RAID impact: 1 application read I/O = 1 back-end read I/O
- For random writes like Exchange:
  - RAID 1/0: 1 application write I/O = 2 back-end disk I/Os
  - RAID 5: 1 application write I/O = 4 back-end disk I/Os (2 read I/Os + 2 write I/Os)
  - RAID 6: 1 application write I/O = 6 back-end disk I/Os (3 read I/Os + 3 write I/Os)
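The translation above can be captured in a few lines; a sketch under the assumption stated on the slide that reads carry no penalty and writes are multiplied by the RAID write penalty:

```python
# RAID write penalties from the slide above.
WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(frontend_iops: float, read_fraction: float, raid: str) -> float:
    """Translate front-end Exchange IOPS into back-end disk IOPS for a RAID type."""
    reads = frontend_iops * read_fraction            # 1 read I/O = 1 disk I/O
    writes = frontend_iops * (1 - read_fraction)     # writes incur the penalty
    return reads + WRITE_PENALTY[raid] * writes
```

With 965 front-end IOPS at a 3:2 (60/40 percent) read/write ratio, RAID 1/0 yields about 1,351 back-end IOPS, while RAID 6 yields about 2,895.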

75 Database IOPS requirements
Formula and calculations:
Total transactional IOPS = IOPS per mailbox × mailboxes per server + Microsoft recommended overhead (20%)
Total transactional IOPS = 5,000 users × 0.134 IOPS per user × 1.20 = 804 IOPS
Total front-end IOPS = total transactional IOPS + EMC required overhead (20%)
Total front-end IOPS = 804 × 1.20 = 965 IOPS (rounded up from 964.8)
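The two overhead steps can be restated in code as a quick check, using the worked-example inputs (5,000 users at the 0.134 IOPS, 200-message profile):

```python
users = 5000
iops_per_user = 0.134                         # 200 messages/user/day profile

transactional = users * iops_per_user * 1.20  # +20% Microsoft recommended overhead
frontend = transactional * 1.20               # +20% EMC required overhead, round up
```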

76 Database Disk Requirements for Performance (IOPS)
Formula:
Disks required for Exchange database IOPS = (total back-end database read IOPS + total back-end database write IOPS) / Exchange random IOPS per disk
Where:
- Total back-end database read IOPS = total front-end IOPS × % read IOPS
- Total back-end database write IOPS = RAID write penalty × total front-end IOPS × % write IOPS

77 Database Disk Requirements for Performance (IOPS)
Calculations (60% reads, 40% writes, 65 random IOPS per NL-SAS disk):
| RAID option | Write penalty | Disks required |
| RAID 1/0 (4+4) | 2 | (965 × 0.60) + 2 × (965 × 0.40) = 1,351; 1,351 / 65 = 21 (round up to 24 disks) |
| RAID 5 (4+1) | 4 | (965 × 0.60) + 4 × (965 × 0.40) = 2,123; 2,123 / 65 = 33 (round up to 35 disks) |
| RAID 6 (6+2) | 6 | (965 × 0.60) + 6 × (965 × 0.40) = 2,895; 2,895 / 65 = 45 (round up to 48 disks) |
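All three rows follow one procedure: convert front-end IOPS to back-end IOPS, divide by the per-disk rating, then round up to a whole RAID-group multiple. A sketch (the function name is ours):

```python
import math

def db_disks_required(frontend_iops, read_fraction, write_penalty,
                      iops_per_disk, raid_group_width):
    """Disks needed for database IOPS, rounded up to full RAID groups."""
    backend = (frontend_iops * read_fraction
               + write_penalty * frontend_iops * (1 - read_fraction))
    disks = math.ceil(backend / iops_per_disk)
    # Round up to a whole RAID-group multiple (8 for R1/0 4+4, 5 for R5 4+1, 8 for R6 6+2).
    return math.ceil(disks / raid_group_width) * raid_group_width
```

`db_disks_required(965, 0.60, 2, 65, 8)` reproduces the 24-disk RAID 1/0 result from the table.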

78 Transactional log IOPS requirements
Formula and calculations:
Disks required for Exchange log IOPS = ((total back-end database write IOPS × 50%) + (total back-end database write IOPS × 10%)) / Exchange sequential IOPS per disk
Disks required for Exchange log IOPS = ((772 back-end write IOPS × 50%) + (772 × 10%)) / 180 sequential Exchange IOPS per disk = (386 + 77.2) / 180 = 2.57 (round up to 4 disks)
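The same calculation in code, assuming (as the formula above does) that logs generate roughly 50% of the database write I/O plus a 10% read overhead, and that the RAID 1/0 log LUNs need an even disk count:

```python
import math

backend_write_iops = 772               # 2 x (965 x 0.40) from the RAID 1/0 example
log_iops = backend_write_iops * 0.50 + backend_write_iops * 0.10   # 463.2 IOPS
raw_disks = log_iops / 180             # 180 sequential IOPS per NL-SAS disk -> ~2.57
log_disks = math.ceil(math.ceil(raw_disks) / 2) * 2   # round up to a RAID 1/0 pair
```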

79 Storage capacity calculations
Formula
Calculate user mailbox size on disk
Calculate database size on disk
Calculate database LUN size
Mailbox size on disk = Maximum mailbox size + White space + Dumpster
Database size on disk = Number of mailboxes per database * Mailbox size on disk
Database LUN size = Number of mailboxes * Mailbox size on disk * (1 + Index space + Additional index space for maintenance) / (1 - LUN free space)

80 Mailbox size on disk Formula
Mailbox size on disk = Maximum mailbox size + White space + Dumpster
Where:
Estimated database whitespace per mailbox = per-user daily message profile * average message size
Dumpster = (per-user daily message profile * average message size * deleted item retention window) + (mailbox quota size * 0.012) + (mailbox quota size * 0.03)

81 Mailbox size on disk Calculations
White space = 200 messages/day * 75 KB = 14.65 MB
Dumpster = (200 messages/day * 75 KB * 14 days) + (2 GB * 0.012) + (2 GB * 0.03) = 205.08 MB + 24.58 MB + 61.44 MB = 291.1 MB
Mailbox size on disk = 2,048 MB mailbox quota + 14.65 MB database whitespace + 291.1 MB Dumpster = 2,354 MB (2.3 GB)
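The mailbox-on-disk example can be checked with a short sketch (sizes in MB, 1 GB = 1,024 MB, using the slide's 200 msg/day, 75 KB, 14-day retention, 2 GB quota profile):

```python
quota_mb = 2 * 1024
whitespace = 200 * 75 / 1024                 # daily profile * avg message size
dumpster = (200 * 75 * 14 / 1024             # deleted item retention window
            + quota_mb * 0.012               # single item recovery factor
            + quota_mb * 0.03)               # calendar version store factor
mailbox_on_disk = quota_mb + whitespace + dumpster
print(round(dumpster, 1), round(mailbox_on_disk))  # 291.1 2354
```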

82 Database Size On Disk & LUN size
Calculations
Database size on disk = 500 users per database * 2,354 MB mailbox size on disk = 1,177 GB (1.15 TB)
Database LUN size = 1,177 GB * (1 + 0.20 + 0.20) / (1 - 0.20) = 2,060 GB (2 TB)
In our example:
20% added for the index
20% added for the index maintenance task
20% LUN free space protection
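As a sketch of the LUN-size arithmetic; note the slide converts MB to GB by dividing by 1,000 here (500 * 2,354 MB = 1,177 GB), and the free-space factor divides rather than multiplies:

```python
db_size_gb = 500 * 2354 / 1000                         # 1,177 GB per database
lun_gb = db_size_gb * (1 + 0.20 + 0.20) / (1 - 0.20)   # index, maintenance, free space
print(db_size_gb, round(lun_gb))  # 1177.0 2060
```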

83 Logs space calculations
Formula & Calculations
Log LUN size = ((Log size * Number of mailboxes per database * Backup/truncation failure tolerance days) + (Space to support mailbox moves)) / (1 - LUN free space)
Log capacity to support 3 days of truncation failure = (500 mailboxes/database x 40 logs/day x 1 MB log size) x 3 days = 58.59 GB
Log capacity to support 1% mailbox moves per week = 500 mailboxes/database x 0.01 x 2.3 GB mailbox size = 11.5 GB
Log LUN size = (58.59 GB + 11.5 GB) / (1 - 0.20) = 87.6 GB (round up to 88 GB)
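The log LUN figures, sketched with the slide's example inputs (1 MB logs, 40 logs per mailbox per day, 20% LUN free space):

```python
truncation_gb = 500 * 40 * 1 * 3 / 1024   # 3 days of truncation failure
moves_gb = 500 * 0.01 * 2.3               # 1% weekly mailbox moves at 2.3 GB each
lun_gb = (truncation_gb + moves_gb) / (1 - 0.20)
print(round(truncation_gb, 2), round(lun_gb, 1))  # 58.59 87.6
```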

84 Total Capacity per Building Block
Total LUN capacity required per server = ((Database LUN size) + (Log LUN size)) * (Number of databases per server)
LUN Capacity Type | LUN capacity required per server
Database LUN capacity | 2,060 GB per LUN * 10 LUNs per server = 20,600 GB
Log LUN capacity | 88 GB per LUN * 10 LUNs per server = 880 GB
Total LUN capacity per server | 20,600 + 880 = 21,480 GB

85 Total number of disks required
Database disks | Log disks
Disks required for Exchange database capacity = Total database LUN size / Physical disk capacity * RAID multiplication factor
Disks required for Exchange log capacity = Total log LUN size / Physical disk capacity * RAID multiplication factor

86 Disk requirements based on capacity
RAID Option | Database disks required
RAID 1/0 (4+4) | 20,600 / 2,794.5 * 2 = 7.37 * 2 = 14.74 (round up to 16 disks)
RAID 5 (4+1) | 20,600 / 2,794.5 * 1.25 = 7.37 * 1.25 = 9.2 (round up to 10 disks)
RAID 6 (6+2) | 20,600 / 2,794.5 * 1.33 = 7.37 * 1.33 = 9.8 (round up to 16 disks)
RAID Option | Log disks required
RAID 1/0 (1+1) | 880 / 2,794.5 * 2 = 0.63 (round up to 2 disks)
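A sketch of the capacity-driven counts; the 2,794.5 GB usable per 3 TB NL-SAS drive comes from the slide, and rounding up to a whole RAID group (8, 5, and 8 disks respectively) is an assumption that matches the slide's results:

```python
import math

usable_gb = 2794.5
raw = 20600 / usable_gb                      # ~7.37 drives of raw data capacity
counts = {}
for name, factor, group in [("RAID 1/0 (4+4)", 2, 8),
                            ("RAID 5 (4+1)", 1.25, 5),
                            ("RAID 6 (6+2)", 1.33, 8)]:
    # Apply the RAID multiplication factor, then round up to full RAID groups
    counts[name] = math.ceil(raw * factor / group) * group
    print(name, counts[name])
```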

87 Final storage calculation results
Building Block Summary
Volume Type | RAID Option | Disks required for performance (IOPS) | Disks required for capacity
Exchange databases | RAID 1/0 (4+4) | 24 disks | 16 disks
Exchange databases | RAID 5 (4+1) | 35 disks | 10 disks
Exchange databases | RAID 6 (6+2) | 48 disks | 16 disks
Exchange logs | RAID 1/0 (1+1) | 4 disks | 2 disks
Best option: RAID 1/0 (4+4) databases (24 disks) plus RAID 1/0 logs (4 disks)
Total disks per building block: 28 disks

88 Final storage calculation results
Building Block Scalability
Total number of disks required for the entire 20,000-user solution in a DAG with two copies = 28 disks per building block * 8 building blocks = 224 disks total

89 Building block sizing and scaling process
Bandwidth calculations
Array throughput (MB/s) validation for Exchange involves:
Determining how many databases the customer will require
Confirming the database LUNs are evenly distributed among the back-end buses and storage processors
Determining whether each bus can accommodate the peak Exchange database throughput
Use this calculation for the required throughput: (DB throughput * number of DBs per bus) = Exchange DB throughput per bus; compare that number with the array bus throughput
DB throughput = Total transactional (user) IOPS per DB * 32 KB + BDM throughput per DB (MB/s)
Number of DBs per bus = the total number of active and passive databases per bus

90 Storage Bandwidth Requirements
The process
The bandwidth validation process involves the following steps:
Determine how many databases are in the Exchange environment
Determine the bandwidth requirements per database
Determine the required bandwidth per array bus
Determine whether each bus can accommodate the peak Exchange database bandwidth
Use DiskSizer for VNX or contact your local storage specialist to get the array and bus throughput numbers (DiskSizer is available through your local USPEED contact)
Evenly distribute database LUNs among the back-end buses and storage processors
Uniform distribution is key for best performance (FE/BE/RAID groups/pools & DAEs)
Distribute DBs uniformly across the pools
Use even numbers on the SAS loops (0 & 2) for maximum performance

91 Storage Bandwidth Requirements
Calculations
Bandwidth per database (MB/s) = Total transactional IOPS per database * 32 KB + Estimated BDM throughput per database (MB/s)
Where:
32 KB is the Exchange page size
Estimated BDM throughput per database is 7.5 MB/s for Exchange 2010 and 2.25 MB/s for Exchange 2013
Required throughput (MB/s) per bus = (throughput MB/s per database) * (total number of active and passive databases per bus)

92 Storage Bandwidth Requirements
Calculations
Transactional throughput per database = (500 users * 0.134 IOPS) * 32 KB = 2.1 MB/s
Throughput per database = 2.1 MB/s + 2.25 MB/s BDM = 4.35 MB/s
Required throughput per bus = 4.35 MB/s * 200 databases per bus = 870 MB/s
Example assumptions:
500 users at 0.134 IOPS per database
200 databases per bus
If the array supports a maximum throughput of 3,200 MB/s per bus, 200 databases can be supported from a throughput perspective.
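The bus-bandwidth example as a sketch, using the Exchange 2013 values from the preceding slide (32 KB page, 2.25 MB/s BDM per database):

```python
users, iops_per_user = 500, 0.134
page_kb, bdm_mbps = 32, 2.25
db_mbps = users * iops_per_user * page_kb / 1024 + bdm_mbps
bus_mbps = db_mbps * 200                   # 200 databases per bus
print(round(db_mbps, 2), round(bus_mbps))
```

The exact result is 4.34375 MB/s per database and 868.75 MB/s per bus; the slide rounds the transactional term up to 2.1 MB/s first, giving its 4.35 and 870 MB/s figures.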

93 Final Design
Configured dedicated storage pools for each mailbox server with 24 x 3 TB NL-SAS drives; each storage pool holds two copies from different mailbox servers.
Separated Exchange log files into different storage pools.
For better storage utilization, created one storage pool with 16 x 3 TB NL-SAS drives for logs per four mailbox-server building blocks.

94 Storage Design validation
Exchange Server Jetstress
Uses Exchange executables to simulate I/O load (use the same version as production)
Initialized and executed during pre-production, before Exchange Server is installed
Throughput and mailbox profile tests: a pass gives confidence that the storage design will perform as designed
Exchange Server Load Generator (LoadGen) (optional)
Validation must be performed in an isolated lab
Produces a simulated client workload against a test Exchange deployment
Estimates the number of users per server and validates the Exchange deployment
LoadGen testing can take many weeks to configure and populate databases

95 Storage Design validation
Exchange Solution Reviewed Program (ESRP) Results Microsoft program for validation of Storage vendor designs with Exchange Vendor runs multiple JetStress tests based on requirements for performance, stress, backup to disk, and log file replay Reviewed and approved by Microsoft

96 ESI for VNX Pool Optimization Utility (a.k.a. SOAP Tool)

97 VNX Storage Pool Optimizer Utility
What is it?
A utility that optimizes pool-based LUNs (thin or thick) to achieve maximum performance
Provides the best option for Exchange, or for any application requiring deterministic high performance across all LUNs in the pool equally
Pre-allocates slices in a pool evenly across all disks and private RAID groups

98 VNX Storage Pool Optimizer Utility
Why do I need to use it?
To achieve the best performance for pool-based LUNs (primarily thin) and to pass Jetstress during pre-deployment storage validation
To mitigate the “Jetstress effect”

99 When to Use Optimizer Utility
Use Cases
CX4/VNX (OE for Block prior to 5.32): Thick LUNs only
With VNX OE for Block 5.32, thick pool LUNs are pre-allocated on creation
VNX Rockies (OE for Block 5.33): Thin and Thick LUNs (primarily Thin)
Use the SOAP tool utility with CX4 and VNX OE for Block Release 32 (Inyo)
Use the new ESI for VNX Pool Optimization utility with VNX OE for Block Release 33 (Rockies)
The short-term plan is to merge both tools into one; the long-term plan is to implement the functionality in native VNX code.

100 What is the problem? How is the issue surfaced? “JetStress effect”
With Jetstress testing, the first database on the Exchange server will:
Experience higher latencies than the others when the LUN is thick
Experience lower latencies than the others when the LUN is thin
“Jetstress effect”: Jetstress data population results in imbalances in the underlying virtual disks in a pool

101 Jetstress Initialization Process
How the Jetstress initialization phase works:
Jetstress creates the first database
It then creates the other databases by copying the first database to them concurrently

102 Looking under the covers… Slice Maps
Without Optimization With Optimization

103 VMDK Optimization Without Optimization With Optimization

104 Options for Thick pool LUNs provisioning
VNX OE for Block Release 32 (Inyo) only
Two options for thick pool LUN provisioning:
For good, optimal performance: no SOAP is necessary; use default pool LUN provisioning via Unisphere, NaviSecCLI, EMC ESI, or EMC VSI
For best performance (max IOPS): use NaviSecCLI to disable pre-provisioning and then run SOAP
Turn off pre-provisioning via the CLI
Run SOAP
Re-enable pre-provisioning

105 Old SOAP Utility – Where and how?
The old SOAP utility is available on the EMC Online Support site
Enter “soap” in the search and select “Support Tools”
Must be used with CX4/VNX Inyo (OE 5.32) only
Supports Thick LUN optimization only
The zip file contains the tool, step-by-step documentation, and a demo video

106 ESI for VNX Pool Optimization Utility
Available for download from the EMC Online Support site in November 2013
Can be used with the Next-Gen VNX series (OE 5.33) only (VNX5200, VNX5400, VNX5600, VNX5800, VNX7600, VNX8000)
Supports Thick and Thin LUN optimization
