Exchange 2003: shared infrastructure with redundant hardware components. Exchange 2010/2013/2016: commodity building blocks with software-controlled redundancy. New architecture and design principles. I/O Meter: how much disk performance does an Exchange database need?

Availability principles: DAG beyond the "A". Failures *do* happen!
- Critical system dependencies decrease availability: deploy multi-role servers; avoid intermediate and extra components (e.g. SAN, network teaming, archiving servers); simpler design is always better (KISS).
- Redundant components increase availability: multiple database copies, multiple balanced servers.
- Failure domains combining redundant components decrease availability. Examples: SAN, blade chassis, virtualization hosts.
- Software, not hardware, drives the solution: Exchange-powered replication and managed availability; redundant transport and Safety Net; load balancing and proxying to the destination.

Classical shared infrastructure design introduces numerous critical dependency components. Relying on hardware requires expensive redundant components. Failure domains reduce availability and introduce significant extra complexity.

Scale the solution out, not in: more servers mean better availability.
- Nearline SAS storage: provide large mailboxes by using large, low-cost drives.
- Exchange I/O has been reduced by 93% since Exchange 2003: an Exchange 2013/2016 database needs ~10 IOPS, while a single Nearline SAS disk provides 70+ IOPS and a single 2.5" 15K rpm SAS disk provides 230+ IOPS.
- 3+ redundant database copies eliminate the need for RAID and backups.
- Redundant servers eliminate the need for redundant server components (e.g. NIC teaming or MPIO).
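The IOPS arithmetic above can be sanity-checked with a short sketch. The figures come from the slide; the helper function and its name are illustrative, not part of any Exchange tool:

```python
# Rough disk-count check for Exchange 2013/2016 databases on JBOD.
# Slide figures: a database needs ~10 IOPS; one Nearline SAS disk
# delivers 70+ IOPS; one 2.5" 15K rpm SAS disk delivers 230+ IOPS.
DB_IOPS_NEEDED = 10
NL_SAS_IOPS = 70
SAS_15K_IOPS = 230

def databases_per_disk(disk_iops: int, db_iops: int = DB_IOPS_NEEDED) -> int:
    """How many database copies a single disk can serve, IOPS-wise."""
    return disk_iops // db_iops

print(databases_per_disk(NL_SAS_IOPS))   # 7 copies fit on one NL SAS disk
print(databases_per_disk(SAS_15K_IOPS))  # 23 on a 15K SAS disk
```

Even the slow Nearline SAS disk has IOPS headroom to spare for the slide's 4-copies-per-disk layout, which is the point: capacity, not performance, sizes the storage.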

Google, Microsoft, Amazon, and Yahoo! have been using commodity hardware for 10+ years already, not only for messaging but for other technologies as well (it actually started with search). Inexpensive commodity servers and storage serve as the building block: easily replaceable, highly scalable, extremely cost efficient. Software, not hardware, is the brain of the solution. Photo credit: Stephen Shankland/CNET

Exchange Preferred Architecture (PA): reference architecture from the Exchange product group. Exchange Product Line Architecture (PLA): tightly scoped reference architecture offering from Microsoft Consulting Services, based on deployment best practices and customer experience; a structured design based on detailed rule sets to avoid common mistakes and misconceptions. Follows cornerstone design principles:
- 4 database copies across 2 sites; witness in a 3rd site
- Unbound service site model (single namespace)
- Multi-role servers
- DAS storage with NL SAS or SATA JBOD
- L7 load balancing (no session affinity)
- Large low-cost mailboxes (25/50 GB standard mailbox size)
- Access enabled for all internal/external clients
- System Center for monitoring
- Exchange Online Protection for messaging hygiene

Stretching your DAG: don't multiply complexities.

Still itching to use three sites? How are the users connected? With the CAS role decoupled from MBX (logically, not physically!) it doesn't matter… unless…

Goal: provide a symmetric database copy layout to ensure even load distribution, so that e.g. a Server3 failure or a Server6 failure is absorbed evenly by the remaining servers.
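The symmetric-layout goal can be illustrated with a toy copy-distribution sketch. The server count, database count, and offset scheme here are purely illustrative, not the actual Exchange layout algorithm; the idea is that copies are rotated so a failed server's active databases activate on *different* survivors instead of piling onto one neighbour:

```python
# Toy symmetric layout: 12 databases, 6 servers, 4 copies each.
# The per-row offset is chosen coprime to the server count so that
# failovers from any one server spread across the survivors.
n_servers, n_copies = 6, 4
offsets = [1, 5]  # one offset per "row" of databases; both coprime with 6

layout = {}
for db in range(12):
    row, col = divmod(db, n_servers)
    off = offsets[row]
    layout[f"DB{db + 1:02d}"] = [f"Server{((col + c * off) % n_servers) + 1}"
                                 for c in range(n_copies)]

# Simulate Server3 failing: each database it hosted as the active (first)
# copy activates its second copy, on a different server each time.
for db, hosts in sorted(layout.items()):
    if hosts[0] == "Server3":
        print(f"{db}: active moves Server3 -> {hosts[1]}")
```

Here DB03 fails over to Server4 and DB09 to Server2, so no single surviving server absorbs the whole load of the failed one.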

High Availability (HA) is redundancy of solution components within a datacenter; Site Resilience (SR) is redundancy across datacenters, providing a DR solution. Both HA and SR are based on native Exchange data replication:
- Each database exists in multiple copies, one of them active.
- Data is shipped to passive copies via transaction log replication over the network.
- It is possible to use a dedicated, isolated network for Exchange data replication.
Network requirements for replication:
- Each active-to-passive database replication stream generates X bandwidth; the more database copies, the more bandwidth is required.
- Exchange natively encrypts and compresses replication traffic.
Pros and cons of a dedicated replication network => not recommended:
- A replication network can help isolate client traffic from replication traffic.
- But it must be truly isolated along the entire data transfer path: having separate NICs but sharing the network path after the first switch is meaningless.
- It requires configuring static routes and eliminating cross-talk, which adds complexity and increases the risk of human error.
- If server NICs are 10 Gbps capable, it is easier to have a single network for everything.
- No need for network teaming: think of a NIC as JBOD.
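The "more copies, more bandwidth" point can be turned into a back-of-envelope model. This is a hedged sketch: the log generation rate, passive copy count, and compression effectiveness are illustrative inputs, not figures from the deck; the 1 MB value is Exchange's standard transaction log file size.

```python
def replication_bandwidth_mbps(logs_per_hour: float,
                               log_size_mb: float = 1.0,
                               passive_copies: int = 3,
                               compression_ratio: float = 0.7) -> float:
    """Rough aggregate Mbps for shipping transaction logs to all passive
    copies. compression_ratio is an assumed effectiveness of Exchange's
    built-in replication compression (1.0 = no compression)."""
    mb_per_sec = logs_per_hour * log_size_mb / 3600
    return mb_per_sec * 8 * passive_copies * compression_ratio

# A server generating 3600 logs/hour (1 MB/s) with 3 passive copies:
print(round(replication_bandwidth_mbps(3600), 1))  # ~16.8 Mbps aggregate
```

The linear scaling with `passive_copies` is the takeaway: every additional database copy multiplies the replication stream.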

Conceptually similar to RAID replication: the goal is to introduce a redundant copy of the data. But it is software-powered, not hardware-powered, and application-aware, which enables each server and its associated storage to act as an independent, isolated building block. Exchange 2013 is capable of automatic reseed using a hot spare (no manual action besides replacing the failed disk!). Finally, the cost factor: RAID 1/0 requires 2x storage, and you still want 4 database copies for Exchange availability!

Exchange mailboxes will grow, but they don't consume that much on day 1, so the desire not to pay for full storage capacity upfront is understandable. However, the inability to provision more storage and extend capacity quickly when needed is a big risk: successful thin provisioning requires significant operational maturity and process excellence rarely seen in the wild. Microsoft's guidance and best practice is to use thick provisioning with low-cost storage; an incremental provisioning model can be considered a reasonable compromise.

Exchange continuous replication is native transactional replication (based on transaction data shipping):
- The database itself is not replicated; transaction logs are played back against the target database copy.
- Each transaction is checked for consistency and integrity before replay, hence physical corruption cannot propagate.
- Page patching is automatically activated for corrupted pages.
- The replication data stream can be natively encrypted and compressed (both settings are configurable; the default is cross-site only).
- In case of data loss, Exchange automatically reseeds or resynchronizes the database, depending on the type of loss.
- If a hot spare disk is configured, Exchange automatically uses it for reseeding (like a RAID rebuild does).

- Two mirrored (RAID 1) disks for the system partition (OS, Exchange binaries, transport queues, logs)
- One hot spare disk
- Nine or more RBOD disks (single-disk RAID 0) for Exchange databases with collocated transaction logs
- Four database copies collocated per disk, not to exceed 2 TB database size
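The building block above implies a simple capacity budget; a sketch, assuming a 12-disk chassis as in the layout just described (database size cap from the slide, totals derived):

```python
# Capacity budget for the 12-disk building block:
# 2 mirrored system disks + 1 hot spare + 9 data disks.
disks_total = 12
system_mirror, hot_spare = 2, 1
data_disks = disks_total - system_mirror - hot_spare  # 9 JBOD data disks
copies_per_disk = 4
max_db_size_tb = 2

print(data_disks)                                      # 9
print(data_disks * copies_per_disk)                    # 36 database copies/server
print(data_disks * copies_per_disk * max_db_size_tb)   # up to 72 TB of DB data
```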

There are servers that can house more than 12 LFF disks (up to 16 with a rear bay), and DAS enclosures are already available that provide 720 TB of capacity in a single 4U unit (90 x 8 TB drives)! Scalability limits: still 100 database copies per server, which means no more than 25 disks at 4 databases/disk, or 50 disks at 2 databases/disk.
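The 100-copy ceiling arithmetic from the slide, made explicit:

```python
# The 100-database-copies-per-server ceiling bounds how many JBOD data
# disks are actually useful, regardless of how many the chassis can hold.
MAX_COPIES_PER_SERVER = 100
max_disks = {copies: MAX_COPIES_PER_SERVER // copies for copies in (4, 2)}
print(max_disks)  # {4: 25, 2: 50} -> 25 disks at 4 copies/disk, 50 at 2
```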

Virtualization introduces an additional critical solution component, adding complexity and overhead, and it uses the shared infrastructure concept, bringing failure domains.
- Consolidated roles have been the guidance since Exchange 2010, and there is only a single role in Exchange 2016!
- Deploying multiple Exchange servers on the same host creates a failure domain.
- Hypervisor-powered high availability is not needed with a proper Exchange DAG design.
- There is no real benefit from virtualization, as Exchange provides equivalent benefits natively at the application level.
- It could still make sense for small deployments, helping consolidate workloads, but be careful with failure domains!

In today's world, users access Exchange mailboxes from many clients and devices, so cumulative client concurrency can be over 100%. Penalty factors are measured in units of load created by a single Outlook client: some clients generate more server load than the baseline Outlook client. The penalty factor should be calculated as a weighted average across all types of clients.
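The weighted-average calculation can be sketched as follows. The client mix and per-client factors below are illustrative placeholders, not published Exchange sizing multipliers:

```python
# Weighted-average client penalty factor.
# 1.0 = baseline Outlook client; shares are concurrency per client type
# and may sum to more than 1.0 (users run several clients at once).
clients = {
    "Outlook":    {"share": 0.60, "factor": 1.00},
    "OWA":        {"share": 0.20, "factor": 0.75},
    "ActiveSync": {"share": 0.40, "factor": 0.50},
}
penalty = sum(c["share"] * c["factor"] for c in clients.values())
print(round(penalty, 2))  # 0.95 -> size for ~95% of a pure-Outlook load
```

Note that the shares sum to 1.2, i.e. 120% cumulative concurrency, matching the point above that concurrency can exceed 100%.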

- Four or more physical servers (collocated roles) in each DAG, split symmetrically between two datacenter sites
- Four database copies (one lagged, with lag replay manager) for HA and SR, on DAS storage with JBOD; minimized failure domains
- Unbound service site model with a single unified load-balanced namespace and the witness in a 3rd datacenter

- There are many Exchange design options in the wild, and not everything that is supported is recommended.
- PA formulates cornerstone design principles; PLA presents a structured design based on strict rules.
- PA/PLA is based on field experience and Office 365 practice.
- Aligning with PA/PLA provides the best co-existence with the cloud and simplifies Hybrid deployment.
- If you cannot follow the PA principles and PLA rules, you are not mature enough to run Exchange: MOVE TO Office 365!
Maturity spectrum: Exchange On-Premises custom design -> Exchange On-Premises recommended best practices -> Exchange On-Premises PA/PLA design -> Exchange Online public cloud (Office 365).
