Windows 2008 Failover Clustering Witness/Quorum Models

Presentation transcript:

1. Windows 2008 Failover Clustering Witness/Quorum Models

2. Node Majority (formerly Majority Node Set)
- Can sustain failures of half the nodes (rounding up) minus one
  - Example: a seven-node cluster can sustain three node failures
- No concurrently accessed disk required
  - Disks/LUNs in resource groups are not concurrently accessed
- Cluster stops if a majority of nodes fail
  - Requires at least three nodes in the cluster
  - Inappropriate for automatic failover in geographically distributed clusters
- Still deployed in environments that want humans to decide service location
- Recommended for clusters with an odd number of nodes
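A minimal PowerShell sketch of selecting this model, assuming the FailoverClusters module (Windows Server 2008 R2 and later; plain Windows Server 2008 uses the legacy cluster.exe interface instead) and that the commands run on a cluster node:

    # Load the failover clustering cmdlets
    Import-Module FailoverClusters

    # Switch the local cluster to the Node Majority quorum model
    Set-ClusterQuorum -NodeMajority

    # Verify the resulting quorum configuration
    Get-ClusterQuorum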

3. Node Majority with Witness Disk (formerly the quorum disk)
- Can sustain failures of half the nodes (rounding up) if the disk witness remains online
  - Example: a six-node cluster with the disk witness online can sustain three node failures
- Can sustain failures of half the nodes (rounding up) minus one if the disk witness fails
  - Example: a six-node cluster with a failed disk witness can sustain two (3 - 1 = 2) node failures
- The witness disk is concurrently accessed by all nodes and acts as the tiebreaker
- The witness disk can fail without affecting cluster operations
- Usually used in two-node clusters and in some geographically dispersed clusters
- Works with SRDF/CE (limit: 64 clusters per VMAX pair)
- Works with VPLEX
- Does not work with RecoverPoint or MirrorView
- Recommended for clusters with an even number of nodes
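The same cmdlet selects this model; a hedged sketch, assuming a shared LUN has already been added to the cluster and that its resource name is "Cluster Disk 1" (hypothetical):

    # Combine node votes with a disk witness as the tiebreaker
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"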

4. Witness Disk Only
- Can sustain failures of all nodes except one, as long as the witness disk stays online
- The legacy cluster model for Windows, used until the introduction of Majority Node Set
- The witness disk is the only voter in the cluster
  - Loss of the witness disk stops the cluster
- Not generally recommended
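For completeness, a sketch of configuring the legacy model, again with a hypothetical disk resource name; as noted above, this leaves the witness disk as a single point of failure:

    # All quorum decisions rest on this one disk; nodes carry no quorum votes
    Set-ClusterQuorum -DiskOnly "Cluster Disk 1"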

5. Node Majority with File Share Witness (FSW)
- Can sustain failures of half the nodes (rounding up) if the FSW remains online
  - Example: a six-node cluster with the FSW online can sustain three node failures
- Can sustain failures of half the nodes (rounding up) minus one if the FSW fails
  - Example: a six-node cluster with a failed FSW can sustain two (3 - 1 = 2) node failures
- Any CIFS share will work; the FSW is not a member of the cluster
- One host can serve as witness for multiple clusters
- FSW placement is important
  - Place it in a third failure domain, or make the FSW itself fail over automatically
  - Timing issues can be a challenge
- Works with no node limitations on SRDF/CE, MV/CE, RP/CE, and VPLEX
- Recommended for most geographically distributed clusters
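A sketch of selecting this model, assuming a hypothetical UNC path to a share hosted outside the cluster:

    # Combine node votes with a file share witness as the tiebreaker
    Set-ClusterQuorum -NodeAndFileShareMajority "\\FS1\ClusterFSW"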

6. Why is geographically distributed clustering a special case?
- A two-site configuration will always include a failure scenario that results in loss of majority
- This is often desired behavior
  - It is sometimes desirable to have humans control failover between sites
  - Failover is automated, but not automatic
  - It is simple to restart the services on the surviving nodes by forcing quorum: net start clussvc /fq (see the sketch after this list)
- If automatic failover between sites is required, deploy an FSW in a separate failure domain (a third site)
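The force-quorum start referenced above, shown as on the slide and with its PowerShell equivalent (Windows Server 2008 R2 and later); Node1 is a hypothetical surviving node:

    # cmd form from the slide, run on a surviving node
    net start clussvc /fq

    # PowerShell equivalent from the FailoverClusters module
    Start-ClusterNode -Name Node1 -FixQuorum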

7. Things to note
- Successful failover requires that all disks in a resource group (RG) be available to the production node, which in a multi-site cluster requires:
  - Replication between sites, and either
  - A method to surface the replicated copies to the nodes in the DR site (Cluster Enabler), OR
  - A virtualization technique whereby the replicated LUNs are always available to the nodes in the DR site (VPLEX)

8. Multi-site configurations

9. Quorum Recommendations

10. FSW failure scenarios: 3-site configuration, odd number of voters
[Diagram: cluster status under each failure scenario]

11. FSW failure scenarios: even number of voters
[Diagram: cluster status under each failure scenario]

12. FSW failure scenarios: 2-site configuration, even number of voters
[Diagram: cluster status under each failure scenario]

13. FSW failure scenarios: 2-site configuration, odd number of voters
[Diagram: cluster status under each failure scenario]

14. Node Weights
- Nodes can be altered to have no vote: cluster.exe node <NodeName> /prop NodeWeight=0 (see the sketch below)
- Useful when you have an unbalanced configuration
- Hotfix required
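A sketch of removing a node's vote, pairing the slide's cluster.exe syntax with the PowerShell property it maps to; Node3 is a hypothetical node name:

    # Legacy cluster.exe form, as on the slide
    cluster.exe node Node3 /prop NodeWeight=0

    # PowerShell equivalent (requires the hotfix mentioned above)
    (Get-ClusterNode -Name Node3).NodeWeight = 0

    # Confirm which nodes still carry a vote
    Get-ClusterNode | Format-Table Name, NodeWeight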

15. Dealing with loss of quorum
- It's an outage that requires manual intervention to recover: net start clussvc /fq
- The cluster returns to unforced operation when a majority of nodes come back online
- Does not alter the RPO of user data
- Rolling failures may result in reversion of the cluster configuration
  - The FSW does not store the cluster database (same behavior as a disk witness)
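A hedged sketch of watching the cluster recover after a forced start, assuming the FailoverClusters module; once a majority of nodes report Up, the cluster reverts to normal quorum on its own:

    # Check which nodes have rejoined after the forced start
    Get-ClusterNode | Format-Table Name, State

    # Review the active quorum configuration
    Get-ClusterQuorum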

16. FSW Considerations
- The FSW share can be SMB 1.0
  - No need to run the same OS as the member nodes
  - Must be in the same domain and forest
  - Cannot be a member of the cluster
  - Can be hosted on NAS (VNX)
- Requires only 5 MB of free space
- If hosted on Windows, the server should be dedicated to the FSW role
  - One server can act as witness for multiple clusters
  - Beware of dependencies
- The administrator must have Full Control on both the share and the NTFS permissions (see the sketch below)
- No DFS shares
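A minimal sketch of creating a witness share on a dedicated Windows server, using the standard net share and icacls tools; the path C:\ClusterFSW and the account CONTOSO\ClusterAdmin are hypothetical:

    # Create the folder and share it with Full Control for the administering account
    mkdir C:\ClusterFSW
    net share ClusterFSW=C:\ClusterFSW /GRANT:CONTOSO\ClusterAdmin,FULL

    # Grant matching Full Control NTFS permissions on the folder
    icacls C:\ClusterFSW /grant "CONTOSO\ClusterAdmin:(OI)(CI)F"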

17. Changing the quorum configuration
Process for an FSW:
- No downtime required, as long as a majority of nodes remain available
- Account requirements
  - Member of the Administrators group on each cluster node
  - A domain user account
- Create the FSW
- Start Failover Cluster Manager
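The same process can be scripted end to end; a hedged sketch reusing the cmdlets shown earlier, with a hypothetical file server FS1 hosting the share:

    # Confirm a majority of nodes are up before changing the quorum model
    Get-ClusterNode | Format-Table Name, State

    # Point the cluster at the file share witness
    Set-ClusterQuorum -NodeAndFileShareMajority "\\FS1\ClusterFSW"

    # Verify that the new quorum configuration took effect
    Get-ClusterQuorum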