Slide 5
- Exchange 2003: shared infrastructure with redundant hardware components
- Exchange 2010/2013/2016: commodity building blocks with software-controlled redundancy; a new architecture and new design principles
- IOMeter: how much disk performance does an Exchange database actually need?
Slide 7
Failures *do* happen!
- Critical system dependencies decrease availability
  - Deploy multi-role servers
  - Avoid intermediate and extra components (e.g. SAN, network teaming, archiving servers)
  - A simpler design is always better: KISS
- Redundant components increase availability
  - Multiple database copies
  - Multiple balanced servers
- Failure domains that combine redundant components decrease availability
  - Examples: SAN, blade chassis, virtualization hosts
- Software, not hardware, drives the solution (see the sketch after this list)
  - Exchange-powered replication and Managed Availability
  - Redundant transport and Safety Net
  - Load balancing and proxying to the destination
Availability principles: "DAG: beyond the 'A'" http://blogs.technet.com/b/exchange/archive/2011/09/16/dag-beyond-the-a.aspx
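A quick way to see why redundant copies beat hardened dependency chains is to compare series and parallel availability. A minimal sketch in Python; the availability figures are illustrative assumptions, not measured values:

```python
# Availability of components in series (a dependency chain) vs. in parallel
# (redundant copies). All figures below are illustrative, not measured.

def series(*availabilities):
    """A chain fails if any link fails: multiply the availabilities."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(a, copies):
    """Redundant copies fail only if every copy fails at once."""
    return 1.0 - (1.0 - a) ** copies

# One 99.9% server behind a 99.9% SAN and a 99.9% blade chassis:
chain = series(0.999, 0.999, 0.999)   # ~99.7% -- worse than any single part

# Four independent 99% commodity servers, each holding a database copy:
dag = parallel(0.99, 4)               # ~99.999999%

print(f"dependency chain:   {chain:.4%}")
print(f"4 redundant copies: {dag:.8%}")
```

Each component added in series multiplies failure exposure; each independent copy added in parallel multiplies it away, which is the whole argument for software-controlled redundancy on commodity hardware.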
Slide 8
- The classical shared-infrastructure design introduces numerous critical dependency components
- Relying on hardware requires expensive redundant components
- Failure domains reduce availability and introduce significant extra complexity
Slide 9
- Scale the solution out, not up: more servers mean better availability
- Nearline SAS storage: provide large mailboxes on large, low-cost drives
- Exchange I/O has been reduced by 93% since Exchange 2003 (see the sketch after this list)
  - An Exchange 2013/2016 database needs ~10 IOPS; a single Nearline SAS disk provides 70+ IOPS; a single 2.5" 15K rpm SAS disk provides 230+ IOPS
- Three or more redundant database copies eliminate the need for RAID and for backups
- Redundant servers eliminate the need for redundant server components (e.g. NIC teaming or MPIO)
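Putting those numbers together, a rough sketch of how many databases a single disk could host on IOPS alone, using the slide's round figures (these are the slide's quoted numbers, not measurements):

```python
# Rough IOPS headroom check using the round numbers quoted on the slide.
DB_IOPS_REQUIRED = 10     # ~IOPS per Exchange 2013/2016 database (slide figure)
NL_SAS_IOPS = 70          # one Nearline SAS disk (slide figure)
SAS_15K_IOPS = 230        # one 2.5" 15K rpm SAS disk (slide figure)

for name, disk_iops in [("NL SAS", NL_SAS_IOPS), ("15K SAS", SAS_15K_IOPS)]:
    dbs_per_disk = disk_iops // DB_IOPS_REQUIRED
    print(f"{name}: up to {dbs_per_disk} databases per disk on IOPS alone")

# IOPS is rarely the limiting factor anymore; capacity and reseed time are,
# which is why the reference design collocates only 4 database copies per disk.
```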
Slide 10
- Google, Microsoft, Amazon, and Yahoo! have been using commodity hardware for 10+ years
- Not only for messaging but for other technologies as well (it actually started with search)
- Inexpensive commodity servers and storage serve as the building block
- Easily replaceable, highly scalable, extremely cost-efficient
- Software, not hardware, is the brain of the solution
Photo credit: Stephen Shankland/CNET
Slide 12
- Exchange Preferred Architecture (PA): the reference architecture from the Exchange product group
- Exchange Product Line Architecture (PLA): a tightly scoped reference architecture offering from Microsoft Consulting Services
  - Based on deployment best practices and customer experience
  - A structured design built on detailed rule sets to avoid common mistakes and misconceptions
- Both follow the cornerstone design principles:
  - Four database copies across two sites, with the witness in a third site
  - Unbound service site model (single namespace)
  - Multi-role servers
  - DAS storage with NL SAS or SATA JBOD
  - L7 load balancing (no session affinity)
  - Large low-cost mailboxes (25/50 GB standard mailbox size)
  - Access enabled for all internal and external clients
  - System Center for monitoring
  - Exchange Online Protection for messaging hygiene
http://blogs.technet.com/b/exchange/archive/2014/04/21/the-preferred-architecture.aspx
Slide 18
- Stretching your DAG: http://aka.ms/partitioned-cluster-networks
- Don't multiply complexities
Slide 19
- Still itching to use three sites? How are the users connected?
- With the CAS role decoupled from MBX (logically, not physically!) it doesn't matter… unless…
Slide 21
Goal: provide a symmetric database copy layout to ensure even load distribution after failures; see the sketch after this slide.
http://blogs.technet.com/b/exchange/archive/2010/09/10/3410995.aspx
[Diagrams: resulting active-copy distribution after a Server3 failure and after a Server6 failure]
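A minimal sketch of the staggering idea behind a symmetric layout: each database starts its copy chain on a different server and steps through the remaining servers with a per-round stride, so a failed server's databases activate evenly across the survivors. The server, database, and copy counts here are hypothetical, and this is an illustration of the principle, not the exact algorithm from the referenced post:

```python
# Staggered ("symmetric") copy layout sketch. Database i starts on server
# i mod N and steps with a per-round stride; strides 1..4 are all coprime
# with N=5, so each preference order visits distinct servers. Counts are
# hypothetical.
from collections import Counter

N, DATABASES, COPIES = 5, 20, 4
SERVERS = [f"Server{n}" for n in range(1, N + 1)]

def preference_order(i):
    start, stride = i % N, 1 + i // N
    return [SERVERS[(start + k * stride) % N] for k in range(COPIES)]

layout = {f"DB{i+1:02d}": preference_order(i) for i in range(DATABASES)}

# Normally copy 0 is active. If Server3 fails, each affected database
# activates its next copy; the staggered strides spread that load evenly.
failed = "Server3"
active = [next(s for s in prefs if s != failed) for prefs in layout.values()]
print(Counter(active))   # an even count (5 each) across the four survivors
```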
Slide 23
- High availability (HA) is redundancy of solution components within a datacenter
- Site resilience (SR) is redundancy across datacenters, providing a DR solution
- Both HA and SR are based on native Exchange data replication
  - Each database exists in multiple copies, one of which is active
  - Data is shipped to the passive copies via transaction log replication over the network
- It is possible to use a dedicated, isolated network for Exchange data replication
  - Network requirements for replication (see the sketch after this list):
    - Each active-to-passive database replication stream generates X bandwidth
    - The more database copies, the more bandwidth is required
    - Exchange natively encrypts and compresses replication traffic
  - Pros and cons of a dedicated replication network => not recommended
    - A replication network can help isolate client traffic from replication traffic
    - A replication network must be truly isolated along the entire data transfer path: separate NICs that share the network path after the first switch are meaningless
    - A replication network requires configuring static routes and eliminating cross-talk; this adds complexity and increases the risk of human error
  - If the server NICs are 10 Gbps capable, it is easier to have a single network for everything
  - No need for network teaming: think of a NIC as JBOD
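A back-of-the-envelope sketch of how replication bandwidth scales with copy count. The log generation rate, compression ratio, and database counts are hypothetical placeholders, not Exchange sizing guidance:

```python
# Cross-site replication bandwidth grows linearly with the number of remote
# passive copies. All figures below are hypothetical placeholders.
LOG_RATE_MBIT = 4.0          # log generation of one active database, Mbit/s
COMPRESSION = 0.7            # fraction remaining after compression (assumed)
DATABASES_PER_SERVER = 20
REMOTE_COPIES = 2            # passive copies hosted in the other datacenter

cross_site = LOG_RATE_MBIT * COMPRESSION * DATABASES_PER_SERVER * REMOTE_COPIES
print(f"cross-site replication per server: ~{cross_site:.0f} Mbit/s")
```

Doubling the remote copy count doubles the cross-site stream, which is why the copy layout and the WAN link have to be sized together.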
Slide 27
- Exchange replication is conceptually similar to RAID: the goal is a redundant copy of the data
- But it is software-powered, not hardware-powered
- Application-aware replication
- It turns each server and its associated storage into an independent, isolated building block
- Exchange 2013 can reseed automatically onto a hot spare (no manual action beyond replacing the failed disk!)
- Finally, the cost factor: RAID 1/0 requires 2x storage, and you still want four database copies for Exchange availability!
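That cost argument in numbers: a quick sketch comparing raw disk space for four copies on RAID 1/0 versus four copies on JBOD (the usable-capacity figure is a hypothetical example):

```python
# Raw storage multiplier: 4 database copies on RAID 1/0 vs. on JBOD.
# The usable-capacity target is a hypothetical example.
COPIES = 4
usable_tb = 100                       # capacity the design must provide

raid10_raw = usable_tb * COPIES * 2   # RAID 1/0 mirrors every disk: 2x per copy
jbod_raw = usable_tb * COPIES         # JBOD: 1x per copy; Exchange is the "RAID"

print(f"RAID 1/0: {raid10_raw} TB raw   JBOD: {jbod_raw} TB raw")
# 8x vs. 4x the usable capacity: software redundancy halves the raw disk bill.
```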
Slide 28
- Exchange mailboxes will grow, but they don't consume much space on day 1
- The desire not to pay for the full storage capacity up front is understandable
- However, being unable to provision more storage and extend capacity quickly when needed is a big risk
- Successful thin provisioning requires an operational maturity and process excellence rarely seen in the wild
- Microsoft guidance and best practice is thick provisioning on low-cost storage
- An incremental provisioning model can be considered a reasonable compromise (see the sketch below)
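A small sketch of what an incremental provisioning model might look like: project mailbox growth and add fully thick-provisioned disk batches ahead of demand. The growth rate, disk size, batch size, and headroom are hypothetical examples:

```python
# Incremental provisioning: add thick-provisioned disk batches ahead of
# projected growth, instead of thin-provisioning the full target on day 1.
# Growth rate, disk size, batch size, and headroom are hypothetical.
MAILBOXES = 5000
GROWTH_GB_PER_MBX_PER_YEAR = 10
DISK_TB, BATCH_DISKS = 8, 12          # buy disks a shelf at a time
provisioned_tb = 96                   # day-1 thick-provisioned capacity

for year in range(1, 6):
    used_tb = MAILBOXES * GROWTH_GB_PER_MBX_PER_YEAR * year / 1024
    while provisioned_tb < used_tb * 1.2:           # keep 20% headroom
        provisioned_tb += DISK_TB * BATCH_DISKS
        print(f"year {year}: add {BATCH_DISKS} disks -> {provisioned_tb} TB")
```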
Slide 29
- Exchange continuous replication is native transactional replication (based on transaction log shipping)
- The database itself is not replicated; transaction logs are played back into the target database copy
- Each transaction is checked for consistency and integrity before replay, so physical corruption cannot propagate
- Page patching is automatically activated for corrupted pages
- The replication data stream can be natively encrypted and compressed (both settings are configurable; the default is cross-site only)
- In case of data loss, Exchange automatically reseeds or resynchronizes the database, depending on the type of loss
- If a hot spare disk is configured, Exchange automatically uses it for reseeding (much like a RAID rebuild)
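To make the "verify before replay" idea concrete, here is a toy model of a log-shipping loop in which the passive copy checks every record's integrity before applying it. This is an illustration only; the checksum scheme and record format are invented and are not Exchange's actual ESE replication:

```python
# Toy model of verified log shipping: the passive copy validates each log
# record before replay, so corruption is rejected rather than applied.
# Invented for illustration; not Exchange's real implementation.
import zlib

def ship(record: bytes) -> tuple[bytes, int]:
    """Active copy sends the log record together with a checksum."""
    return record, zlib.crc32(record)

def replay(database: list, record: bytes, checksum: int) -> bool:
    """Passive copy verifies integrity before applying the record."""
    if zlib.crc32(record) != checksum:
        return False                 # corrupt in transit: ask for a resend
    database.append(record)          # only verified records reach the copy
    return True

passive_db: list = []
log, crc = ship(b"update mailbox 42")
assert replay(passive_db, log, crc)                    # clean record applies
assert not replay(passive_db, log[:-1] + b"!", crc)    # corruption is rejected
```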
Slide 30
Example server disk layout:
- Two mirrored (RAID 1) disks for the system partition (OS, Exchange binaries, transport queues and logs)
- One hot spare disk
- Nine or more RBOD disks (single-disk RAID 0) for Exchange databases with collocated transaction logs
- Four database copies collocated per disk, not exceeding 2 TB per database
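Adding that layout up for a common 12-bay LFF chassis (the disk size is a hypothetical example; the bay split comes from the list above):

```python
# Capacity math for the 12-bay layout above. Disk size is a hypothetical example.
TOTAL_BAYS = 12
SYSTEM, HOT_SPARE = 2, 1
DISK_TB = 8
DBS_PER_DISK, MAX_DB_TB = 4, 2

data_disks = TOTAL_BAYS - SYSTEM - HOT_SPARE    # 9 RBOD data disks
db_copies = data_disks * DBS_PER_DISK           # 36 database copies per server
print(f"{data_disks} data disks, {db_copies} database copies, "
      f"up to {data_disks * DISK_TB} TB raw "
      f"({DBS_PER_DISK} x {MAX_DB_TB} TB max per disk)")
```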
Slide 31
- There are servers that can house more than 12 LFF disks (up to 16 with a rear bay)
- DAS enclosures are already available that provide 720 TB of capacity in a single 4U unit (90 x 8 TB drives)!
- Scalability limit: still 100 database copies per server
  - That means no more than 25 drives at 4 databases per disk, or 50 drives at 2 databases per disk (see the check below)
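The same limit expressed as a check; the 100-copy cap is the slide's figure, and the drive counts follow from it:

```python
# The per-server cap of 100 database copies bounds how many data drives one
# Exchange server can use, regardless of how many the chassis can hold.
MAX_COPIES_PER_SERVER = 100

for dbs_per_disk in (4, 2):
    usable_drives = MAX_COPIES_PER_SERVER // dbs_per_disk
    print(f"{dbs_per_disk} DBs/disk -> at most {usable_drives} data drives")

# 4 DBs/disk -> 25 drives; 2 DBs/disk -> 50 drives. A 90-bay enclosure is
# therefore more capacity than a single Exchange server can actually use.
```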
Slide 34
Virtualization:
- Introduces an additional critical solution component, with its complexity and overhead
- Uses the shared-infrastructure concept and brings in failure domains
  - Role consolidation has been the guidance since Exchange 2010, and Exchange 2016 has only a single role anyway
  - Deploying multiple Exchange servers on the same host creates a failure domain
- Hypervisor-powered high availability is not needed with a proper Exchange DAG design
- No real benefit from virtualization, since Exchange provides the equivalent natively at the application level
- It can still make sense for small deployments to help consolidate workloads, but be careful with failure domains!
Slide 37
- In today's world, users access Exchange mailboxes from many clients and devices
- Cumulative client concurrency can exceed 100%
- Penalty factors are measured in units of load created by a single Outlook client
  - Some clients generate more server load than the baseline Outlook client
- The overall penalty factor should be calculated as a weighted average across all client types (see the sketch below)
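A minimal sketch of that weighted average; the client mix and the per-client penalty factors are hypothetical examples, not published sizing values:

```python
# Weighted-average penalty factor across client types. The mix and factors
# below are hypothetical examples, not published sizing values.
clients = {
    # client type: (share of total concurrency, penalty vs. baseline Outlook)
    "Outlook":    (0.60, 1.00),
    "OWA":        (0.25, 0.75),
    "ActiveSync": (0.30, 0.50),   # shares sum past 1.0: concurrency > 100%
}

concurrency = sum(share for share, _ in clients.values())
penalty = sum(share * factor for share, factor in clients.values()) / concurrency
print(f"cumulative concurrency: {concurrency:.0%}, weighted penalty: {penalty:.2f}")
```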
Slide 38
- Four or more physical servers (with collocated roles) in each DAG, split symmetrically between two datacenter sites
- Four database copies (one of them lagged, with the lag replay manager) for HA and SR, on DAS JBOD storage; minimized failure domains
- Unbound service site model with a single unified load-balanced namespace and the witness in a third datacenter
Slide 39
- There are many Exchange design options in the wild
- Not everything that is supported is recommended
- The PA formulates cornerstone design principles; the PLA provides a structured design based on strict rules
- PA/PLA are based on field experience and Office 365 practice
- Aligning with PA/PLA gives the best coexistence with the cloud and simplifies hybrid deployment
- If you cannot follow the PA principles and PLA rules, you are not mature enough to run Exchange on-premises: move to Office 365!
[Diagram: a spectrum from Exchange on-premises custom design, to on-premises recommended best practices, to on-premises PA/PLA design, to Exchange Online public cloud (Office 365)]
Slide 41
E-mail: borisl@microsoft.com
Profile: https://www.linkedin.com/in/borisl
Social: https://www.facebook.com/lokhvitsky